Ananny argues that algorithmic errors should not be treated as isolated technical malfunctions, because they expose the social, institutional and political conditions through which computational systems are designed, deployed and judged. The article’s central proposition is that to see like an algorithmic error is to interpret mistakes as sociotechnical events: failures produced not only by code, datasets or statistical thresholds, but also by organisations, business models, regulatory gaps, institutional values and unequal power to define what counts as success or harm. Rather than asking whether an algorithm merely “works”, Ananny asks who is authorised to name an error, whose injury becomes visible, and whether a failure is framed as a private glitch or a public problem.

The case study of remote proctoring during the Covid-19 shift to online education illustrates this argument with particular force. A facial detection system used in exam surveillance produced higher error rates for darker-skinned students, while also presuming that all students could access quiet, visually controlled domestic environments. What initially appeared to be a technical bias in face detection therefore revealed a wider structure of racial, socioeconomic and pedagogical inequality.

Ananny’s broader contribution is to insist that algorithmic mistakes can become democratic resources when they are analysed expansively rather than debugged narrowly. Consequently, algorithmic accountability requires more than accuracy improvements; it demands public scrutiny of the systems, assumptions and institutions that decide which failures matter, who must endure them and what forms of repair are imaginable.
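To make the disparity in the proctoring case concrete, the following sketch shows the kind of disaggregated error audit that surfaces it. Everything here is a hypothetical illustration, not data or code from Ananny’s article: the group labels, records and rates are invented, and the point is only that a per-group breakdown exposes a failure that a single aggregate accuracy figure would hide.

```python
# Hypothetical audit sketch: disaggregated failure rates for a face-detection
# system. All records below are invented for illustration; they are not data
# from Ananny's article or from any real proctoring vendor.
from collections import defaultdict

# Each record pairs a (hypothetical, self-reported) skin-tone group with
# whether the detector found the student's face.
audit_records = [
    ("lighter", True), ("lighter", True), ("lighter", True), ("lighter", False),
    ("darker", True), ("darker", False), ("darker", False), ("darker", True),
]

def failure_rates(records):
    """Return per-group detection-failure rates and the overall rate."""
    totals, failures = defaultdict(int), defaultdict(int)
    for group, detected in records:
        totals[group] += 1
        if not detected:
            failures[group] += 1
    per_group = {g: failures[g] / totals[g] for g in totals}
    overall = sum(failures.values()) / sum(totals.values())
    return per_group, overall

per_group, overall = failure_rates(audit_records)
print(f"overall failure rate: {overall:.0%}")  # 38% -- looks like one problem
for group, rate in sorted(per_group.items()):
    print(f"{group:>8}: {rate:.0%}")           # 50% vs 25% -- it is not
```

On Ananny’s reading, a table like this is where analysis begins rather than ends: it shows that an error exists, but not who was authorised to name it, why the institution deployed the system, or what repair follows.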