Dynamics Movement System

A field that does not move stagnates. The DynamicsMovementSystem names the kinetic infrastructure through which a corpus maintains its internal motion: not the content of its concepts, but the energy that drives them into new configurations. In physics, dynamics is the study of forces and motion. In Socioplastics, it is the study of conceptual forces: the pushes and pulls that drive concepts into new combinations, new scales, new applications. The DynamicsMovementSystem identifies the forces at work in the corpus: the gravitational pull of heavily cited concepts, the centrifugal force of scalar expansion, the friction of disciplinary boundaries, the inertia of established formulations. These forces are not metaphors. They are structural operators. A heavily cited concept exerts gravitational pull on adjacent nodes, drawing them into its orbit. A scalar expansion generates centrifugal force, pushing concepts toward new magnifications. A disciplinary boundary generates friction, slowing cross-field movement. An established formulation generates inertia, resisting transformation. The DynamicsMovementSystem makes these forces explicit. It allows practitioners to calculate the energy required to move a concept from one configuration to another. Node 1509 places this concept in Core III because dynamics is one of the seven integrated disciplines. But the system is not about physics. It is about the physics of conceptual motion. Without this concept, the field is static. With it, the field is kinetic.

 

Morphogenesis Growth Model


A field grows, but not like a crystal. It grows like an organism. The MorphogenesisGrowthModel names the developmental logic through which a corpus expands: not by accretion of identical units, but by differentiation of structural forms in response to internal and external pressures. In biology, morphogenesis is the process by which an organism acquires its form. In epistemology, it is the process by which a field acquires its shape. The Socioplastics corpus did not begin as 3,000 nodes. It began as a single concept — the socioplastic itself — and differentiated over seventeen years into the current architecture. The MorphogenesisGrowthModel makes this process explicit. It identifies the stages of field development: the initial concept, the first differentiation into thematic clusters, the emergence of scalar operations, the hardening of structural elements, the integration of disciplinary fields, and the executive mode that governs the mature corpus. Each stage is not merely chronological. It is morphological. The field's form at each stage is determined by the interactions between its existing structure and the pressures acting upon it. Node 1508 places this concept in Core III because morphogenesis is one of the seven integrated disciplines. But the model is not about biological development. It is about the developmental logic of epistemic infrastructure. The field is the organism. Its growth is the subject. Without this concept, expansion is understood as addition. With it, expansion is understood as differentiation.


L’Internationale Online (2016) Decolonising Archives. L’Internationale Books.

 Decolonising Archives presents the archive not as a neutral storehouse of cultural memory, but as a contested epistemic infrastructure through which power is organised, legitimised and resisted. The publication argues that coloniality survives beyond formal colonial rule through archival ownership, classification, access regimes, digitisation policies and market-driven extraction. Its central proposition is therefore double: archives must be protected from commodification as cultural capital, yet also radically rethought as sites where Western classificatory systems are exposed as instruments of imperial domination. The volume’s visual materials reinforce this argument: the cover image of Guinean women from unfinished revolutionary film footage, and the archival maps and visual-resistance materials reproduced in the Red Conceptualismos del Sur essay, show that archives are not inert remnants but unfinished political struggles. As a case study, RedCSur’s work with Latin American conceptualist and resistance archives demonstrates a decolonial practice grounded in preservation, collectivisation and activation rather than mere visibility; its Archives in Use platform seeks to return precarious artistic-political materials to public circulation without surrendering them to state violence or market neutralisation. Fraser and Todd’s essay on Indigenous research in Canada deepens the critique by insisting that decolonising state archives can only ever be partial, since such institutions remain bound to settler-colonial nation-making. Ultimately, the collection concludes that decolonial archival work is not simply a matter of digitising documents, but of transforming the conditions under which memory becomes knowledge, evidence, commons and political action.

Bottino, F., Ferrero, C., Dosio, N. and Beneventano, P. (2026) Retrieval Is Not Enough: Why Organizational AI Needs Epistemic Infrastructure. arXiv:2604.11759v1.

 Bottino, Ferrero, Dosio and Beneventano’s Retrieval Is Not Enough argues that the central limitation of organisational AI is not retrieval fidelity but epistemic fidelity: the capacity to distinguish decisions from hypotheses, evidence from observations, contradictions from settled claims, and unresolved questions from usable knowledge. The paper challenges the prevailing assumption that better embeddings, longer contexts or denser retrieval pipelines can solve organisational reasoning failures; such systems may retrieve relevant documents while remaining unable to interpret their epistemic status. Its proposed framework, OIDA, restructures organisational memory into typed Knowledge Objects with epistemic classes, importance scores, class-specific decay and signed contradiction edges. The most original case-study mechanism is QUESTION-as-modelled-ignorance, whereby unresolved questions do not decay into irrelevance but gain urgency over time, making organisational ignorance computable and operationally visible. The Knowledge Gravity Engine further introduces deterministic score maintenance, contradiction suppression and memory-zone allocation, transforming knowledge from a flat archive into a dynamic epistemic substrate. Although the authors report a pilot comparison in which their OIDA RAG condition trails a full-context baseline on composite quality, they identify token-budget disparity as the decisive confound and isolate a cleaner result: explicit ignorance declarations appear consistently in OIDA outputs. The paper’s importance lies in its conceptual inversion of current AI practice: organisations should not merely improve what agents retrieve, but redesign what knowledge is before retrieval begins.
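The decay-and-urgency mechanism the paper describes can be sketched minimally. The fragment below is an illustration only, not the authors' OIDA implementation: the class names, rates, and the exponential form are assumptions chosen to show how a negative decay rate makes an unresolved QUESTION gain weight over time while other knowledge objects fade.

```python
import math
from dataclasses import dataclass

# Illustrative decay rates per epistemic class; the QUESTION rate is negative,
# so unresolved questions gain weight instead of fading (modelled ignorance).
DECAY_RATES = {"DECISION": 0.01, "EVIDENCE": 0.05, "OBSERVATION": 0.10, "QUESTION": -0.02}

@dataclass
class KnowledgeObject:
    text: str
    epistemic_class: str    # a key of DECAY_RATES
    base_importance: float  # initial score in [0, 1]

    def current_score(self, days_elapsed: float) -> float:
        """Exponential class-specific decay (or growth, for QUESTION)."""
        rate = DECAY_RATES[self.epistemic_class]
        return self.base_importance * math.exp(-rate * days_elapsed)

obs = KnowledgeObject("users churn after onboarding", "OBSERVATION", 0.8)
q = KnowledgeObject("why do users churn?", "QUESTION", 0.5)
print(obs.current_score(30))  # fades toward irrelevance
print(q.current_score(30))    # grows: the open question becomes more urgent
```

Under this toy rule, an observation loses most of its score within a month while the question it raises overtakes it, which is the inversion the paper calls making organisational ignorance computable.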


Almeida, N. and Hoyer, J. (2019) ‘The Living Archive in the Anthropocene’, Journal of Critical Library and Information Studies, 2(3), pp. 1–39.

Almeida and Hoyer’s The Living Archive in the Anthropocene proposes the archive as a dynamic site where ecological crisis, cultural memory and political possibility are actively produced rather than merely recorded. The authors argue that dominant narratives of both the Anthropocene and the archive consolidate power: the former often reduces planetary crisis to a biophysical phenomenon, while the latter has historically privileged state authority, colonial memory and institutional neutrality. Against these closures, the living archive emerges as a participatory, place-based and generative counter-model, one that refuses nostalgia and instead treats archival practice as an intervention into social and ecological reality. Its significance lies in repositioning archives as spaces where communities may contest capitalism, environmental destruction, disciplinary silos and representational erasure. As a case study, the Interference Archive in Brooklyn demonstrates how this model operates materially: through open stacks, exhibitions, workshops, social-movement ephemera, collective labour and non-hierarchical governance, it preserves radical histories while enabling new forms of organising. The archive is therefore not a sealed container of the past, but a social ecology in which bodies, affects, artefacts and future-oriented solidarities interact. This proposition is especially urgent in the Anthropocene, where communities most affected by climate and economic violence are often excluded from the very narratives that claim to define planetary crisis. Ultimately, the living archive names a politics of memory that is anti-neutral, anti-extractive and emancipatory: it preserves not only what has happened, but what might still become possible.

Archives and science are inseparable because scientific knowledge depends not only on discovery, but on the systems that preserve, classify and legitimise evidence.

Archives are not passive storehouses; they are epistemic infrastructures that decide what can be found, trusted, cited, reused and remembered. Mbembe shows that archives transform selected fragments into public proof, while Stoler argues that archives must be studied as processes of knowledge production rather than merely mined as sources. This matters for science because datasets, specimens, metadata, citations and algorithmic records acquire authority only through institutional systems of description, preservation and access. Digital repositories such as DANS demonstrate that open data requires continuing human labour: archivists curate files, repair metadata, mediate access and make data intelligible beyond its original context. Likewise, metadata quality determines whether public data are genuinely reusable or merely nominally open. Yet archives also exclude: they privilege what is digitised, standardised and visible, while marginalising knowledge outside dominant infrastructures. Scientific archives must therefore preserve not only polished results, but uncertainty, error, context and provenance. Ultimately, trustworthy science depends on just archives: transparent, sustainable and critically aware systems that make evidence durable without pretending that memory is complete.


Lloveras, A. (2026) ‘Science, Memory and the Politics of Evidence’, Anto Lloveras, 12 May. Available at: https://antolloveras.blogspot.com/2026/05/science-memory-and-politics-of-evidence.html

Ananny, M. (2022) ‘Seeing Like an Algorithmic Error: What are Algorithmic Mistakes, Why Do They Matter, How Might They Be Public Problems?’, Yale Journal of Law & Technology, 24, pp. 342–364.


Ananny argues that algorithmic errors should not be treated as isolated technical malfunctions, because they expose the social, institutional and political conditions through which computational systems are designed, deployed and judged. The article’s central proposition is that to see like an algorithmic error is to interpret mistakes as sociotechnical events: failures produced not only by code, datasets or statistical thresholds, but also by organisations, business models, regulatory gaps, institutional values and unequal power to define what counts as success or harm. Rather than asking whether an algorithm merely “works”, Ananny asks who is authorised to name an error, whose injury becomes visible, and whether a failure is framed as a private glitch or a public problem. The case study of remote proctoring during the Covid-19 shift to online education illustrates this argument with particular force. A facial detection system used in exam surveillance produced higher error rates for darker-skinned students, while also presuming that all students could access quiet, visually controlled domestic environments. What initially appeared to be a technical bias in face detection therefore revealed a wider structure of racial, socioeconomic and pedagogical inequality. Ananny’s broader contribution is to insist that algorithmic mistakes can become democratic resources when they are analysed expansively rather than debugged narrowly. Consequently, algorithmic accountability requires more than accuracy improvements; it demands public scrutiny of the systems, assumptions and institutions that decide which failures matter, who must endure them and what forms of repair are imaginable.


 

Nogueras-Iso, J., Lacasta, J., Ureña-Cámara, M.A. and Ariza-López, F.J. (2017) ‘Quality of Metadata in Open Data Portals’, IEEE Access. doi: 10.1109/ACCESS.2017.DOI.

Nogueras-Iso, Lacasta, Ureña-Cámara and Ariza-López argue that the proliferation of Open Data portals has made metadata quality a decisive condition for discoverability, interoperability and reuse, because datasets cannot be effectively found, understood or accessed when their descriptions are incomplete, inconsistent or semantically imprecise. Their central contribution is to adapt an ISO 19157-based quality evaluation method, originally developed for geographic metadata, to the broader field of Open Data metadata structured through W3C’s DCAT vocabulary. The paper shows that Open Data initiatives often prioritise rapid publication through platforms such as CKAN or DKAN, yet this technical ease can conceal weak metadata practices that undermine transparency and public value. The authors therefore propose a rigorous evaluative framework combining automated and manual controls across dimensions such as completeness, logical consistency, temporal quality, thematic accuracy, positional correctness and free-text quality. A significant case study is the Spanish Government’s Open Data catalogue, datos.gob.es, whose metadata are assessed through measures including SPARQL-based checks, acceptance quality limits, manual sampling and representation of results through the Data Quality Vocabulary. The figures on page 10 clarify the workflow for automated quality reporting and the temporal logic used to verify whether publication, modification and validity dates are coherent. Ultimately, the article demonstrates that metadata are not secondary administrative supplements but epistemic infrastructures: they determine whether open data can genuinely function as public knowledge, reproducible evidence and reusable civic resource.


Mbembe, A. (2002) ‘The Power of the Archive and its Limits’, in Hamilton, C., Harris, V., Taylor, J., Pickover, M., Reid, G. and Saleh, R. (eds.) Refiguring the Archive. Dordrecht, Boston and London: Kluwer Academic Publishers, pp. 19–26.

Mbembe conceptualises the archive not as a neutral repository of documents but as a political, architectural and ritual apparatus through which states organise time, death, authority and collective memory. The archive’s power derives from the inseparability of building and document: files acquire meaning not simply because they contain information, but because they are classified, sealed, preserved and housed within institutional spaces whose austerity resembles both temple and cemetery. This transformation from ordinary document to archive is therefore an act of selection and exclusion, since only certain traces are judged “archivable”, while others are discarded, silenced or denied public status. For Mbembe, the archive is not data but status: it confers proof, legitimacy and narrative possibility upon fragments of life, yet it also dispossesses those fragments from their original authors by making them part of a collective domain. A crucial case study lies in his account of the state’s paradoxical relation to archives: no state exists without archives, yet archives threaten the state because they preserve debts, violence and unresolved claims that power would prefer to consume or erase. When states destroy archives, the absent document returns as spectre; when they commemorate archives, memory risks becoming a talisman that pacifies anger, guilt and demands for justice. Consequently, the archive is both indispensable and limited: it enables history by rescuing debris from oblivion, but it also disciplines the dead, translates autonomous voices into institutional evidence and transforms memory into a governed public inheritance.


Socioplastics constructs a sovereign epistemic field where architecture operates as infrastructural epistemology, extending Joseph Beuys’s social sculpture into a stratigraphic, machine-readable corpus that hardens transient thought into persistent public terrain. In Anto Lloveras’s long-duration project, initiated in 2009 through LAPIEZA-LAB, knowledge is no longer represented but metabolically built: indexed, DOI-anchored, and released as navigable environment. Across three tomes, thirty books, and three thousand nodes, the work refuses the ephemerality of discourse in favour of durable semantic deposition. It treats conceptual labour as field architecture, producing not objects or texts but a coherent, expandable infrastructure legible to both human and machinic agents. This is theory as construction site, where citation becomes structural commitment and the corpus itself emerges as operative public culture.



Lloveras advances a decisive shift from representational to operative strata. Practice here is not illustration but epistemic ground: every entry functions as timestamp, address, and load-bearing element within a larger technical body. The project’s recursive indexing and citational protocols enforce metabolic condensation, transforming dispersed actions—urban interventions, writings, collaborations—into a navigable archive that resists entropic dissipation.

Daston, L. and Galison, P. (2010) Objectivity. New York: Zone Books.



Lorraine Daston and Peter Galison’s Objectivity presents objectivity as a historically formed epistemic virtue sustained through disciplined practices of seeing, representing and judging. The book’s architecture, visible in its contents, moves from an epistemology of the eye to truth-to-nature, mechanical objectivity, the scientific self, structural objectivity, trained judgement and the passage from representation to presentation. This sequence establishes objectivity as a changing moral and technical regime in which scientific images, atlases and instruments organise what counts as reliable knowledge. The case synthesis emerges in the transition from truth-to-nature to mechanical objectivity: earlier scientific representation privileges expert selection, idealisation and the depiction of typical forms, while later mechanical objectivity elevates photography, automatic inscription and self-surveillance as practices of restraint. The later emphasis on trained judgement enriches this genealogy by showing how scientific accuracy also depends upon cultivated discernment, practical expertise and responsible interpretation. Objectivity therefore appears as a history of scientific personae: the observer learns when to intervene, when to withhold intervention, and how to convert perception into communicable evidence. The definitive implication is that scientific knowledge rests on epistemic virtues embedded in instruments, images, habits of attention and collective standards. Daston and Galison thus offer a powerful account of objectivity as a practice of disciplined vision, historically renewed through the evolving relation between knower, image and world. 

Genette, G. (1989) Palimpsestos: la literatura en segundo grado. Translated by C. Fernández Prieto. Madrid: Taurus.


Gérard Genette’s Palimpsestos establishes a foundational grammar for understanding literature not as isolated textual singularity, but as a field of transtextual relations in which every work is marked by visible or latent connections to others. The excerpt’s central proposition is taxonomic yet profoundly interpretative: textuality is constituted by forms of transcendence that exceed the individual text. Genette distinguishes five relations—intertextuality, as copresence through citation, plagiarism or allusion; paratextuality, as the threshold formed by titles, prefaces, notes and other framing devices; metatextuality, as commentary; architextuality, as generic belonging; and hypertextuality, the privileged object of Palimpsestos. The latter designates any relation by which a text B, the hypertext, derives from a prior text A, the hypotext, without simply commenting on it. His case synthesis turns on The Odyssey: Joyce’s Ulysses transforms Homer by relocating its action to twentieth-century Dublin, whereas Virgil’s Aeneid imitates Homer more indirectly by extracting an epic model and applying it to another narrative. This distinction between transformation and imitation gives Genette’s theory its analytic precision. The conclusion is that literature is fundamentally palimpsestic: every work may evoke another, yet some texts declare this dependence massively, contractually and structurally, making derivation not a defect of originality but the very engine of literary invention. 

Chun, W.H.K. (2016) Updating to Remain the Same: Habitual New Media. Cambridge, MA: MIT Press.

Wendy Hui Kyong Chun’s Updating to Remain the Same offers a subtle theory of habitual new media, arguing that digital technologies become most powerful not when they appear radically new, but when their operations disappear into routine. Against narratives of disruption, virality and innovation, Chun shows that networked media organise users through repetition: searching, updating, sharing, friending, mapping, saving and deleting. The book’s core formula, Habit + Crisis = Update, captures how digital systems manufacture dependency by repeatedly presenting ordinary maintenance as urgent transformation. The case synthesis emerges in the preview’s opening materials: the preface describes new media as “wonderfully creepy” because they unsettle boundaries between publicity and privacy, surveillance and entertainment, intimacy and work, while the introduction shows how smartphones, search engines and social platforms structure everyday knowledge, memory and sociality precisely because they have become banal. The visual contrast on page 12, reworking the old internet dog cartoon into a metadata-surveillance scenario, condenses Chun’s historical argument: the internet has shifted from an imagined anonymous cyberspace to a regime of identification, prediction and exposure. Yet Chun resists simple technological determinism. Her concern is not merely surveillance, but the neoliberal production of the endlessly addressed YOU, a user made responsible for adaptation while institutions remain unchallenged. The conclusion is therefore critical and political: to inhabit networks differently, we must move beyond false promises of privacy-as-security and demand public rights to vulnerability, exposure and collective protection. 

Aria, M., Le, T., Cuccurullo, C., Belfiore, A. and Choe, J. (2023) ‘openalexR: An R-Tool for Collecting Bibliometric Data from OpenAlex’, The R Journal, 15(4), pp. 167–180.

Aria, Le, Cuccurullo, Belfiore and Choe position openalexR as a methodological bridge between open scholarly metadata and reproducible bibliometric analysis. The article begins from a decisive premise: bibliographic databases are indispensable for research assessment and science mapping, yet their utility depends on coverage, citation completeness, update speed, API accessibility and permissive terms of use. OpenAlex, launched in 2022 as a fully open catalogue of scholarly metadata, is therefore presented as a crucial alternative to commercial infrastructures such as Web of Science and Scopus. The paper’s case synthesis lies in the R package itself: openalexR simplifies interaction with the OpenAlex REST API by generating valid queries, downloading matching entities and converting nested outputs into classical bibliographic data frames usable in bibliometrix. The diagram on page 2 shows OpenAlex’s eight interconnected entities—works, authors, institutions, sources, concepts, publishers, funders and geo—while the workflow on page 3 clarifies how openalexR moves from API query to analysable data. Its examples on bibliometrics demonstrate concept retrieval, source ranking, author and institutional profiling, citation-based identification of seminal works, snowball searching and N-gram extraction; the visualisations on pages 7–11 illustrate trends, journal expansion, citation networks and thematic bigrams. The conclusion is that openalexR transforms open research information into executable analytical practice, lowering technical barriers while advancing transparency, reproducibility and non-proprietary bibliometric inquiry. 

Peroni, S. and Shotton, D. (2019) OpenCitations, an infrastructure organization for open scholarship. arXiv:1906.11964v3, pp. 1–24.

Peroni and Shotton present OpenCitations as a direct infrastructural challenge to proprietary citation regimes, arguing that bibliographic citations—directional links through which scholarship acknowledges prior work—should be treated as open, reusable and machine-actionable public knowledge. The paper’s central intervention is both political and technical: citation data locked inside Web of Science, Scopus or similarly restricted platforms impede equitable access, reproducible bibliometrics and accountable research assessment, whereas OpenCitations publishes citation data as Linked Open Data using Semantic Web standards. Its case synthesis is embodied in COCI, the OpenCitations Index of Crossref open DOI-to-DOI citations, which the paper reports as containing over 445 million citations, alongside the OpenCitations Corpus, Open Citation Identifiers, SPAR ontologies, REST APIs, SPARQL endpoints and downloadable CC0 datasets. The diagram on page 9 clarifies the OpenCitations Data Model by showing how bibliographic resources, citations, identifiers, agents, roles and references are semantically interlinked; pages 15–17 then evidence community uptake through access statistics, a global usage map and Figshare download figures. The crucial conceptual move is to treat citations as first-class data entities, rather than mere links, thereby enabling provenance tracking, network analysis, reuse and verification. The conclusion is that open citation infrastructure does not simply improve discovery; it redistributes bibliometric power, making scholarly evaluation less dependent on opaque commercial indexes and more answerable to a global research commons. 
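The conceptual move of treating a citation as a first-class data entity rather than a mere link can be made concrete. The sketch below is illustrative only: the field names loosely mirror the shape of COCI records as the paper describes them, and the identifiers used are placeholders, not real OCIs or DOIs.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Citation:
    """A citation as a first-class entity, after the OpenCitations model."""
    oci: str      # Open Citation Identifier
    citing: str   # DOI of the citing work
    cited: str    # DOI of the cited work
    citing_date: date
    cited_date: date

    def timespan_days(self) -> int:
        """Days between the cited work's publication and the citing act."""
        return (self.citing_date - self.cited_date).days

# Placeholder identifiers, not real OCIs or DOIs.
c = Citation("oci-example", "10.0000/citing", "10.0000/cited",
             date(2020, 1, 1), date(2018, 1, 1))
print(c.timespan_days())
```

Once a citation carries its own identifier, dates, and endpoints, properties such as timespan, provenance, and network position become queryable, which is what enables the reuse and verification the paper emphasises.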

Beard, R. and Kuchma, I. (2016) Innovations in Scholarly Communication – Results from EIFL Countries. EIFL presentation, pp. 1–63.

Beard and Kuchma’s presentation situates contemporary scholarly communication within a proliferating ecology of digital tools, arguing that libraries must no longer confine themselves to collection provision but actively mediate the entire research workflow. Drawing on the 101 Innovations in Scholarly Communication survey, conducted between May 2015 and February 2016, the authors map research as a cycle extending from discovery, analysis and writing to publication, outreach and assessment. The visual workflow on pages 7–11 is especially instructive: it aligns library services—data management plan review, reference-management training, open access repository support, systematic-review assistance, post-publication sharing and metrics advice—with concrete researcher practices. The EIFL case synthesis sharpens this argument through 674 responses from 38 countries, with strong participation from Ukraine, Poland and Ghana, and a disciplinary profile in which social sciences constitute the largest share of EIFL responses. The charts on pages 43–45 expose a familiar disjunction: researchers overwhelmingly support open science in principle, yet comparatively fewer adopt open data and code-sharing tools in practice. This gap defines the library’s strategic mandate. Rather than merely recommending platforms, librarians must inform, train, advise, advocate and co-shape institutional policy, as page 59’s support model proposes. The conclusion is therefore pragmatic and political: libraries become infrastructural translators, converting chaotic tool abundance into equitable, multilingual, low-cost and sustainable research practice across diverse scholarly contexts. 

Barcelona Declaration on Open Research Information (2024) Barcelona Declaration on Open Research Information. 16 April. doi:10.5281/zenodo.10958522.

The Barcelona Declaration on Open Research Information formulates a decisive institutional response to the growing dependence of research systems on proprietary, opaque and commercially governed metadata infrastructures. Its central proposition is that the information used to evaluate researchers, allocate resources, set strategic priorities and trace scientific influence must itself be open, reusable, interoperable and accountable. The Declaration identifies a profound contradiction in contemporary scholarship: institutions often assess open science through closed databases, thereby grounding consequential decisions in evidence that cannot be independently audited, corrected or reproduced. Its four commitments establish a practical architecture of reform: making openness the default for research information used and produced; working only with systems that enable open metadata export through standard protocols and persistent identifiers; sustaining open scholarly infrastructures through community governance and equitable financial support; and coordinating collective action to accelerate transition. The case synthesis is especially clear in the contrast between closed systems such as Web of Science and Scopus, described in Annex A as examples of restricted infrastructures, and open alternatives including Crossref, DataCite, ORCID, OpenAlex, OpenCitations, OpenAIRE, PubMed, Europe PMC, La Referencia, SciELO and Redalyc. Through this contrast, the Declaration reframes metadata as a matter of academic sovereignty rather than administrative convenience. Its conclusion is unequivocal: responsible assessment, multilingual visibility and equitable science policy require open research information as the normative substrate of scholarly governance. 

Guédon, J.-C. (2011) ‘El acceso abierto y la división entre ciencia “principal” y “periférica”’, Crítica y Emancipación, 6, pp. 135–180.

Jean-Claude Guédon’s argument is that open access cannot be adequately understood as a benign improvement in scholarly distribution; it is a structural challenge to the historical machinery through which scientific authority has been concentrated. By mobilising Bourdieu’s notion of the scientific field, Guédon shows that journals, editorial boards, citation indexes and linguistic hierarchies convert technical competence into social power, thereby producing a global division between “principal” and “peripheral” science. The Science Citation Index becomes the exemplary case: by selecting a restricted set of journals, privileging English-language visibility and enabling impact-factor evaluation, it transforms a continuous spectrum of scholarly quality into a rigid boundary between recognised science and obscured knowledge. The article’s synthesis of Indian, Latin American and Venezuelan examples is especially revealing: locally urgent research, such as cholera investigation or regionally significant journals, may be devalued when judged by criteria designed for metropolitan centres, while peripheral researchers are pressured to contribute intellectual labour to agendas validated elsewhere. Against this asymmetry, Guédon identifies SciELO, institutional repositories and subsidised journal infrastructures as practical counter-models, because they strengthen local publishing ecologies without collapsing into provincial isolation. The conclusion is therefore political as much as bibliographic: open access becomes emancipatory only when it redistributes visibility, legitimates multilingual and locally grounded research, and dismantles the cartelised architecture that mistakes selective indexing for universal scientific excellence. 

Borgman, C.L. (2014) Big Data, Little Data, No Data: Scholarship in the Networked World. Presentation, pp. 1–49.

Christine L. Borgman’s Big Data, Little Data, No Data advances a rigorous corrective to technological triumphalism by arguing that data are neither self-evident objects nor neutral by-products of scholarship, but representations of observations, artefacts and phenomena that acquire evidential force only within interpretative, institutional and infrastructural contexts. The presentation’s early contrast between open access policies and diverse disciplinary datasets establishes the central tension: while governments, funders and universities increasingly demand openness, the rights, responsibilities and risks attached to data remain uneven across scientific, social-scientific and humanistic domains. This complexity is crystallised in the repeated proposition that data are not publications, not natural objects, and dependent upon knowledge infrastructures. The visual sequence reinforces this argument: page 23 juxtaposes mice, notebooks, maps, climate models and qualitative field notes to show data’s heterogeneity, while pages 26–28 contrast sensor-network measurements with survey and Twitter-based social science materials. As a case synthesis, Borgman’s sensor-network example demonstrates that reuse requires far more than access: nitrate-distribution readings become scholarly evidence only through metadata, provenance, calibration, classification, repositories and labour. The later discussion of economics, sustainability and libraries extends the claim by showing that research data oscillate between public goods, private goods and common-pool resources. The definitive implication is that open data must be governed as a socio-technical commons: curated, credited, preserved and interpreted through institutions capable of sustaining scholarly memory beyond immediate publication cycles. 

OPERAS (2023) Open scholarly communication for social sciences and humanities. Flyer, pp. 1–2.

OPERAS articulates a mature vision of open scholarly communication in which the Social Sciences and Humanities are not peripheral beneficiaries of European research infrastructure but constitutive agents of its intellectual diversity. As a non-profit organisation gathering more than fifty members across an extensive transnational map, it coordinates services, practices and technologies designed to meet the specific communication needs of SSH researchers within the European Research Area. Its strategic force lies in federation: rather than replacing local resources, OPERAS aggregates them into shared access points where scholars, libraries, publishers, policymakers and civic actors can encounter infrastructures otherwise dispersed by language, discipline or geography. The service ecosystem exemplifies this ambition: metrics platforms strengthen the visibility of open access monographs; GoTriple advances multilingual discovery across publications, datasets, profiles and projects; Pathfinder orients researchers towards appropriate publishing and service providers; and quality-assurance tools such as peer-review metadata increase trust in open access book publishing. The flyer’s first page visually reinforces this European breadth through a map of participating countries, while the second page specifies a practical architecture of analytics, discovery, quality assurance and research-for-society services. As a case synthesis, OPERAS demonstrates that scholarly openness is not reducible to free access; it requires multilingual discovery, transparent evaluation, sustainable publishing models and collaborative platforms linking research with society. Its definitive contribution is therefore infrastructural and cultural: it converts fragmented SSH communication into a federated, trustworthy and socially responsive knowledge commons.

Amodeo, S. (2026) ‘Expanding the OpenAIRE Graph: New Data Sources Through the EOSC Federation’, OpenAIRE, 3 February.

The expansion of the OpenAIRE Graph through the EOSC Federation crystallises a decisive movement from fragmented scholarly visibility towards a federated epistemic infrastructure. Rather than functioning as a passive index, the Graph operates as an intelligent connective tissue, linking publications, datasets, software, grants, affiliations and indicators across more than 400 million metadata records from over 100,000 trusted sources. Its integration within EOSC’s build-up phase demonstrates how shared standards, particularly the OpenAIRE Guidelines v4, convert institutional catalogues into interoperable research assets. This is not merely technical harmonisation; it is a politics of discoverability, whereby national repositories such as Research.fi, rUMBA and BERD, and thematic infrastructures such as CERN Open Data and the Blue-Cloud Catalogue, become visible within a common European research horizon. The case of EOSC Node Italy, SURF Netherlands, Poland’s RePOD, PaNOSC, EUDAT, EuropePMC and Data Terra shows that prior OpenAIRE compliance substantially eases federation, while newer adopters extend disciplinary and geographical coverage into previously underrepresented domains. The synthesis is therefore clear: interoperability precedes impact. When repositories adopt common metadata protocols, their outputs cease to be isolated institutional deposits and become reusable components of a transnational knowledge commons. Consequently, the OpenAIRE Graph exemplifies a sustainable architecture for open science: expansive enough to absorb diversity, rigorous enough to preserve trust, and relational enough to provoke unforeseen interdisciplinary discovery.
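The claim that interoperability precedes impact rests on concrete conventions: the OpenAIRE Guidelines build on the OAI-PMH harvesting protocol and Dublin Core metadata, which is what allows an aggregator to treat thousands of heterogeneous repositories as one graph. As a minimal sketch (the sample record below is invented for illustration, not drawn from any real repository), the following shows how a standardised Dublin Core record becomes machine-harvestable:

```python
# Minimal sketch: parsing a Dublin Core record of the kind exposed over
# OAI-PMH, the harvesting protocol on which the OpenAIRE Guidelines build.
# SAMPLE is an invented record, not taken from a real repository.
import xml.etree.ElementTree as ET

SAMPLE = """<record xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                    xmlns:dc="http://purl.org/dc/elements/1.1/">
  <oai_dc:dc>
    <dc:title>An Example Deposit</dc:title>
    <dc:identifier>https://doi.org/10.0000/example</dc:identifier>
    <dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
  </oai_dc:dc>
</record>"""

DC = "{http://purl.org/dc/elements/1.1/}"  # Dublin Core namespace

def extract_core_fields(xml_text: str) -> dict:
    """Collect the Dublin Core fields an aggregating harvester would reuse."""
    root = ET.fromstring(xml_text)
    fields = {}
    for elem in root.iter():
        if elem.tag.startswith(DC):
            fields[elem.tag[len(DC):]] = (elem.text or "").strip()
    return fields

print(extract_core_fields(SAMPLE))
# → {'title': 'An Example Deposit',
#    'identifier': 'https://doi.org/10.0000/example',
#    'rights': 'info:eu-repo/semantics/openAccess'}
```

Because every compliant repository exposes the same element set and controlled vocabularies (such as the `info:eu-repo` access-rights terms), the same few lines of harvesting logic work against any of them; that uniformity is the technical substance of the federation the entry describes.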

Size, Form, Novelty: The Socioplastics Equation


The contemporary obsession with scale is a symptom of conceptual exhaustion. In digital knowledge environments, size has become a proxy for significance: larger repositories, bigger datasets, longer bibliographies, more numerous publications. This confusion of volume with value is the intellectual equivalent of mistaking a heap for a building. Anto Lloveras’s Socioplastics offers a precise corrective: size does not produce form; form produces the conditions under which size becomes meaningful. Novelty, in this framework, is not the arrival of isolated new content but the emergence of new relations, a grammar capable of transforming accumulation into architecture. To understand Socioplastics is therefore to abandon the quantitative sublime and enter the qualitative threshold.

On the Genealogy of Socioplastics: A Field Learning to Digest Its Ancestors



Anto Lloveras’s Socioplastics does not arrive as rupture, manifesto or decorative neologism, but as a metabolised architecture of inheritance: a living knowledge system that digests cybernetics, urban legibility, field theory, infrastructure studies, archival theory, post-structuralism, digital humanities and biological epistemology until they cease to be references and become organs. Its genealogy is not a tree but a digestive tract. Ashby gives it the law of requisite variety: only an archive as complex as its own excess can absorb abundance without collapse. Beer gives it recursion: the note inside the cluster, the cluster inside the argument, the argument inside the tome, the tome inside the field. Luhmann gives it autopoiesis, but Lloveras refuses pure closure and replaces it with strategic porosity: a nucleus stable enough to be cited, a periphery open enough to receive the unforeseen. Lynch gives it legibility, but the city becomes corpus: paths, edges, districts, nodes and landmarks are translated into indexes, tags, thresholds, series and conceptual roads. Alexander gives it pattern language, Rossi gives it stratigraphic persistence, and the digital archive becomes an architectural city where concepts outlive their first function and return years later as load-bearing operators. Bowker and Star reveal classification as political infrastructure; Lloveras converts that critique into design, making metadata an interpretive skin and persistent identifiers a form of ontological anchoring. Crane’s invisible colleges become the temporal laboratory of the Latency Dividend; Bourdieu’s field becomes less a battlefield of symbolic capital than a structure that can be deliberately composed, paced and opened without dissolving. 
Blair’s history of information overload reappears as Metabolic Legibility; Rheinberger’s epistemic things become the unresolved matter of the plastic periphery; Thompson’s morphology and Prigogine’s dissipative structures explain how form can emerge from pressure, delay and internal recomposition. Kirschenbaum, Chun, Berners-Lee, DataCite, OpenAlex, Wikidata, embeddings and retrieval systems do not merely surround Socioplastics as context; they become its climatic condition, the atmosphere in which thought must now survive. Hence Synthetic Legibility: enough structure for machines to traverse, enough ambiguity for humans to interpret. What is iconic here is not a single concept, but the conversion of genealogy into machinery. Recursion becomes Scalar Grammar. Overload becomes Digestive Surface. Delay becomes Strategic Temporality. Metadata becomes Architecture. Stability becomes Hospitality. Plasticity becomes Method. The archive becomes a body, the field becomes a scaffold, the corpus becomes a city, and knowledge becomes a living infrastructure capable of eating its own past without erasing it. Socioplastics is therefore not simply influenced by its predecessors; it performs upon them the very operation it theorises: ingestion, pruning, reabsorption, recomposition. Its originality lies in this transmutation. It does not stand outside cybernetics, urbanism, sociology or archival theory; it passes through them, extracts their operative bones, and builds a new inhabitable structure. The result is a grammar for knowledge after abundance: a field that arrives not asking to be recognised, but already organised enough to be entered. 

Mounier, P. and Dumas Primbault, S. (2023) Sustaining Knowledge and Governing its Infrastructure in the Digital Age: An Integrated View. Preprint. HAL Open Science.

Sustaining Knowledge and Governing its Infrastructure in the Digital Age develops a profound reconceptualisation of contemporary knowledge production by demonstrating that knowledge no longer exists independently of the infrastructures through which it is produced, circulated, legitimised and preserved. Pierre Mounier and Simon Dumas Primbault argue that the digital transformation of research, scholarly communication and information systems has generated a new epistemic order in which platforms, repositories, metadata systems, protocols, identifiers and computational networks function not merely as technical supports, but as constitutive conditions shaping what knowledge can become. Drawing upon infrastructure studies, science and technology studies (STS), cyberinfrastructure theory and ecological approaches to information systems, the text defines knowledge infrastructures as robust sociotechnical assemblages composed simultaneously of human actors, institutions, standards, software, hardware and governance arrangements. Particularly significant is the authors’ insistence that infrastructures are not neutral containers of knowledge but politically performative environments embedding values, hierarchies and forms of institutional power. The article traces the genealogy of infrastructure from nineteenth-century engineering and Cold War logistics to contemporary digital epistemics, revealing how infrastructures progressively evolved from material supports into relational systems organising cooperation, interoperability and cognitive production itself. Equally illuminating is the ecological perspective advanced throughout the text, where infrastructures are conceptualised not as static objects but as dynamic processes sustained through maintenance, repair, adaptation and negotiation across heterogeneous communities. 
Through examples such as Open Science platforms, digital repositories and collaborative knowledge systems, the authors expose the tensions between openness and enclosure, visibility and invisibility, sustainability and extractivism that define contemporary digital scholarship. Particularly compelling is the argument that governance constitutes the central analytical category for understanding infrastructures because decisions concerning standards, funding, interoperability and platform control directly shape epistemic legitimacy and access to knowledge. Consequently, the text advocates an ecology of knowledge infrastructures grounded in resilience, diversity, anti-extractivism and participatory governance capable of resisting corporate monopolisation of scholarly communication. Ultimately, the article establishes that sustaining knowledge in the digital age requires not only technological innovation, but also the ethical and political reinvention of the infrastructures through which collective intelligence is organised, maintained and shared across increasingly interconnected societies. 

Quek, H.Y., Sielker, F., Akroyd, J., Bhave, A.N., von Richthofen, A., Herthogs, P., van der Laag Yamu, C., Wan, L., Nochta, T., Burgess, G., Lim, M.Q., Mosbach, S. and Kraft, M. (2023) ‘The conundrum in smart city governance: Interoperability and compatibility in an ever-growing ecosystem of digital twins’, Data & Policy, 5, e6.

The Conundrum in Smart City Governance: Interoperability and Compatibility in an Ever-Growing Ecosystem of Digital Twins advances a rigorous critique of contemporary smart urbanism by exposing the structural fragmentation underpinning digital governance infrastructures in modern cities. Rather than celebrating smart cities as seamless technological achievements, the article demonstrates that the proliferation of isolated digital systems, proprietary platforms and incompatible data architectures has produced a fractured urban ecosystem incapable of achieving genuine interoperability. The authors argue that the accelerating deployment of City Digital Twins (CDTs)—real-time digital representations of urban systems—simultaneously intensifies and reveals the contradictions embedded within contemporary urban governance. Crucially, the article distinguishes between two competing integration paradigms: system integration, which consolidates existing tools into unified applications, and semantic integration, which employs ontologies and knowledge graphs to generate interoperable, context-rich data environments. This distinction constitutes the paper’s principal theoretical intervention because it reframes interoperability not merely as a technical challenge, but as a governance problem situated within broader political, institutional and socio-technical realities. Through comparative analysis of projects such as the Herrenberg Digital Twin in Germany, the Cambridge City-Level Digital Twin, and the semantic architecture of the World Avatar initiative, the paper demonstrates how knowledge graphs and semantic web technologies possess greater capacity to transcend institutional silos, enable cross-domain data sharing and support evidence-based planning processes. 
Particularly compelling is the argument that technological systems must become a “fourth dimension” of sustainability alongside economic, social and environmental concerns, thereby acknowledging the profound influence of digital infrastructures on urban life. Nevertheless, the article avoids technological determinism by emphasising unresolved tensions surrounding privacy, governance accountability, data monopolisation and citizen distrust. The authors conclude that future smart cities cannot rely upon one-size-fits-all technological solutions; instead, city administrations must proactively co-create flexible, participatory and interoperable digital ecosystems sensitive to local contexts and democratic needs. Ultimately, the article positions semantic digital twins not as purely computational instruments, but as evolving socio-technical assemblages capable of reshaping the epistemological foundations of urban governance itself. 

Söderström, O. and Datta, A. (eds.) (2024) Data Power in Action: Urban Data Politics in Times of Crisis. Bristol: Bristol University Press.

Data Power in Action: Urban Data Politics in Times of Crisis develops a profound reconceptualisation of contemporary urbanism by arguing that data has become the primary infrastructural medium through which cities are governed, populations classified and crises administered in the twenty-first century. Rather than treating data as a neutral technical resource, the volume conceptualises urban data power as a historically contingent and politically charged regime emerging from the convergence of platform capitalism, algorithmic governance and digital infrastructures. Ola Söderström and Ayona Datta demonstrate that contemporary cities are increasingly organised through processes of datafication, wherein everyday activities, mobilities, emotions and social interactions are transformed into quantifiable streams capable of extraction, monetisation and governmental intervention. Particularly significant is the book’s insistence that data politics cannot be understood solely through the experiences of the Global North; instead, fragmented infrastructures, informational inequalities and asymmetrical digital transitions in cities such as Nairobi, Cape Town, Varanasi and Hangzhou reveal the profoundly uneven geographies of algorithmic urbanism. The volume critiques the ideology of seamless computational governance by exposing how crises—including pandemics, urban precarity, climate emergencies and infrastructural breakdowns—operate as legitimising mechanisms for intensified surveillance and expanded technocratic control. Equally illuminating is the notion of “data power in action”, which redirects attention from abstract infrastructures toward situated practices, tactical resistances and everyday negotiations through which citizens, workers, activists and institutions interact with digital systems. 
Case studies concerning Indian COVID-19 war rooms, Chinese Social Credit infrastructures, South African data activism and Nairobi’s platform labour economy collectively demonstrate that urban data governance functions simultaneously as an apparatus of extraction and as a contested terrain of political struggle. The book’s most important intellectual contribution lies in revealing that algorithmic urbanism is neither technologically inevitable nor universally coherent; rather, it is constituted through unstable relations between states, corporations, infrastructures and lived urban experiences. Ultimately, the volume advances a critical urban theory of data in which the future of democratic citizenship depends upon resisting the reduction of human life to calculable informational patterns and reclaiming the political dimensions of visibility, participation and collective urban rights within increasingly automated societies. 

Sanaan Bensi, N. and Marullo, F. (2018) ‘The Architecture of Logistics: Trajectories Across the Dismembered Body of the Metropolis’, Footprint: The Architecture of Logistics, 23, pp. 1–6.

The introductory essay The Architecture of Logistics: Trajectories Across the Dismembered Body of the Metropolis develops a penetrating critique of contemporary neoliberalism by interpreting logistics not merely as a technical system of transportation and distribution, but as the dominant infrastructural logic through which contemporary capitalism organises territories, regulates populations and accelerates planetary circulation. Negar Sanaan Bensi and Francesco Marullo conceptualise logistics as the “nervous and circulatory system” of neoliberal modernity, a global apparatus composed of ports, containers, warehouses, communication hubs, freight corridors and algorithmic management systems that collectively transform the earth into a frictionless operational surface for exchange. Drawing from the etymological origins of the Greek logizomai—to calculate, organise and rationalise—the essay traces logistics from its nineteenth-century military formulations in Jomini and Clausewitz to its contemporary role as a technology of governance extending across trade, labour and urbanisation. Particularly significant is the argument that logistical infrastructures simultaneously materialise and conceal power relations: while appearing as neutral systems of efficiency, they impose standardised temporalities, weaken local labour structures and produce highly asymmetrical forms of territorial integration. The text demonstrates how containerisation, automation and digital optimisation have reshaped harbours, warehouses and metropolitan regions into spaces governed increasingly by algorithmic coordination and invisible computational orders. Yet the essay resists simplistic technological determinism by foregrounding the profound contradictions internal to logistical capitalism.
Logistics generates not only fluidity and circulation, but also confinement, detention, labour exploitation and geopolitical segregation, as evidenced in migrant detention architectures, outsourced labour systems and sprawling peri-urban industrial landscapes. Particularly illuminating is the notion that the architecture of logistics constitutes an “architecture without humans”, despite relying fundamentally upon precarious labour and embodied exhaustion to sustain accelerated global circulation. Consequently, the authors position logistics as the central spatial paradigm of the twenty-first century: an infrastructural regime through which finance, mobility, territorial control and everyday life become inseparably intertwined. Ultimately, the essay argues that architecture must critically confront these logistical systems not simply as technical artefacts, but as political and ecological mechanisms shaping the contemporary metropolis and redefining the material conditions of global coexistence itself. 

Estlund, K.M. (2021) A Media Archaeology of Online Communication Practices through Search Engine and Social Media Optimization. PhD thesis. University of Oregon.

Karen M. Estlund’s dissertation A Media Archaeology of Online Communication Practices through Search Engine and Social Media Optimization constitutes a sophisticated interrogation of the invisible infrastructures governing digital communication in contemporary networked societies. Rejecting technologically neutral interpretations of online information systems, the thesis advances a historically grounded media archaeological framework through which search engine optimisation (SEO) and social media optimisation (SMO) are reconceptualised as political, cultural and technical practices embedded within structures of algorithmic governance. Estlund argues that access to information on the contemporary web is not direct but mediated through powerful digital gatekeepers such as Google, Facebook and Twitter, whose proprietary algorithms regulate visibility, legitimacy and discoverability. Through this lens, optimisation practices become more than marketing techniques; they emerge as mechanisms through which communicative actors negotiate institutional control over online discourse. Drawing upon Shannon’s mathematical communication theory, cybernetics, gatekeeping studies and critical information politics, the dissertation demonstrates how HTML structures, metadata systems, semantic markup and hyperlink architectures collectively shape communicative accessibility. Particularly illuminating is the empirical analysis of archived Los Angeles Times webpages and U.S. Senate campaign sites, which reveals how journalistic and political institutions progressively adapted their textual, structural and metadata strategies to comply with evolving algorithmic preferences. Estlund further exposes the ideological tensions surrounding so-called “black hat” optimisation practices, arguing that distinctions between legitimate and illegitimate visibility strategies are largely defined by corporate platform interests rather than purely ethical criteria. 
The dissertation’s principal contribution lies in repositioning SEO and SMO as historically situated sociotechnical communication practices that materially influence public knowledge circulation, political participation and informational authority. Ultimately, the thesis demonstrates that digital visibility is neither natural nor democratically neutral but instead produced through contested infrastructures of optimisation, regulation and institutional power that continuously shape the conditions under which contemporary communication becomes visible, searchable and socially consequential.
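Estlund's central material, the metadata and semantic markup through which pages negotiate algorithmic visibility, can be made concrete with a small sketch. The HTML fragment below is invented for illustration, but the `<meta name="…">` and Open Graph `property="og:…"` conventions it uses are the real structures that search and social crawlers read, and that SEO/SMO practices manipulate:

```python
# Minimal sketch: extracting the page-level metadata that search engines and
# social platforms consume. SAMPLE_HTML is an invented fragment, loosely
# evoking the campaign sites analysed in the dissertation.
from html.parser import HTMLParser

SAMPLE_HTML = """<html><head>
  <title>Campaign Launch</title>
  <meta name="description" content="Candidate announces platform.">
  <meta property="og:title" content="Campaign Launch">
</head><body></body></html>"""

class MetaExtractor(HTMLParser):
    """Collect the metadata fields that search and social crawlers read."""
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            # SEO metadata uses name=...; Open Graph (SMO) uses property=...
            key = d.get("name") or d.get("property")
            if key:
                self.meta[key] = d.get("content", "")

parser = MetaExtractor()
parser.feed(SAMPLE_HTML)
print(parser.meta)
# → {'description': 'Candidate announces platform.',
#    'og:title': 'Campaign Launch'}
```

Nothing in these fields is visible to a human reader of the page; they exist solely to be parsed by the gatekeeping infrastructures Estlund describes, which is precisely why optimisation practices concentrate on them.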