Dynamics Movement System

A field that does not move stagnates. The DynamicsMovementSystem names the kinetic infrastructure through which a corpus maintains its internal motion: not the content of its concepts, but the energy that drives them into new configurations. In physics, dynamics is the study of forces and motion. In Socioplastics, it is the study of conceptual forces: the pushes and pulls that drive concepts into new combinations, new scales, new applications. The DynamicsMovementSystem identifies the forces at work in the corpus: the gravitational pull of heavily cited concepts, the centrifugal force of scalar expansion, the friction of disciplinary boundaries, the inertia of established formulations. These forces are not metaphors. They are structural operators. A heavily cited concept exerts gravitational pull on adjacent nodes, drawing them into its orbit. A scalar expansion generates centrifugal force, pushing concepts toward new magnifications. A disciplinary boundary generates friction, slowing cross-field movement. An established formulation generates inertia, resisting transformation. The DynamicsMovementSystem makes these forces explicit. It allows practitioners to calculate the energy required to move a concept from one configuration to another. Node 1509 places this concept in Core III because dynamics is one of the seven integrated disciplines. But the system is not about physics. It is about the physics of conceptual motion. Without this concept, the field is static. With it, the field is kinetic.


Morphogenesis Growth Model


A field grows, but not like a crystal. It grows like an organism. The MorphogenesisGrowthModel names the developmental logic through which a corpus expands: not by accretion of identical units, but by differentiation of structural forms in response to internal and external pressures. In biology, morphogenesis is the process by which an organism acquires its form. In epistemology, it is the process by which a field acquires its shape. The Socioplastics corpus did not begin as 3,000 nodes. It began as a single concept — the socioplastic itself — and differentiated over seventeen years into the current architecture. The MorphogenesisGrowthModel makes this process explicit. It identifies the stages of field development: the initial concept, the first differentiation into thematic clusters, the emergence of scalar operations, the hardening of structural elements, the integration of disciplinary fields, and the executive mode that governs the mature corpus. Each stage is not merely chronological. It is morphological. The field's form at each stage is determined by the interactions between its existing structure and the pressures acting upon it. Node 1508 places this concept in Core III because morphogenesis is one of the seven integrated disciplines. But the model is not about biological development. It is about the developmental logic of epistemic infrastructure. The field is the organism. Its growth is the subject. Without this concept, expansion is understood as addition. With it, expansion is understood as differentiation.


L’Internationale Online (2016) Decolonising Archives. L’Internationale Books.

 Decolonising Archives presents the archive not as a neutral storehouse of cultural memory, but as a contested epistemic infrastructure through which power is organised, legitimised and resisted. The publication argues that coloniality survives beyond formal colonial rule through archival ownership, classification, access regimes, digitisation policies and market-driven extraction. Its central proposition is therefore double: archives must be protected from commodification as cultural capital, yet also radically rethought as sites where Western classificatory systems are exposed as instruments of imperial domination. The volume’s visual materials reinforce this argument: the cover image of Guinean women from unfinished revolutionary film footage, and the archival maps and visual-resistance materials reproduced in the Red Conceptualismos del Sur essay, show that archives are not inert remnants but unfinished political struggles. As a case study, RedCSur’s work with Latin American conceptualist and resistance archives demonstrates a decolonial practice grounded in preservation, collectivisation and activation rather than mere visibility; its Archives in Use platform seeks to return precarious artistic-political materials to public circulation without surrendering them to state violence or market neutralisation. Fraser and Todd’s essay on Indigenous research in Canada deepens the critique by insisting that decolonising state archives can only ever be partial, since such institutions remain bound to settler-colonial nation-making. Ultimately, the collection concludes that decolonial archival work is not simply a matter of digitising documents, but of transforming the conditions under which memory becomes knowledge, evidence, commons and political action.

Bottino, F., Ferrero, C., Dosio, N. and Beneventano, P. (2026) Retrieval Is Not Enough: Why Organizational AI Needs Epistemic Infrastructure. arXiv:2604.11759v1.

 Bottino, Ferrero, Dosio and Beneventano’s Retrieval Is Not Enough argues that the central limitation of organisational AI is not retrieval fidelity but epistemic fidelity: the capacity to distinguish decisions from hypotheses, evidence from observations, contradictions from settled claims, and unresolved questions from usable knowledge. The paper challenges the prevailing assumption that better embeddings, longer contexts or denser retrieval pipelines can solve organisational reasoning failures; such systems may retrieve relevant documents while remaining unable to interpret their epistemic status. Its proposed framework, OIDA, restructures organisational memory into typed Knowledge Objects with epistemic classes, importance scores, class-specific decay and signed contradiction edges. The most original case-study mechanism is QUESTION-as-modelled-ignorance, whereby unresolved questions do not decay into irrelevance but gain urgency over time, making organisational ignorance computable and operationally visible. The Knowledge Gravity Engine further introduces deterministic score maintenance, contradiction suppression and memory-zone allocation, transforming knowledge from a flat archive into a dynamic epistemic substrate. Although the authors report a pilot comparison in which their OIDA RAG condition trails a full-context baseline on composite quality, they identify token-budget disparity as the decisive confound and isolate a cleaner result: explicit ignorance declarations appear consistently in OIDA outputs. The paper’s importance lies in its conceptual inversion of current AI practice: organisations should not merely improve what agents retrieve, but redesign what knowledge is before retrieval begins.
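The typed-object design the paper describes can be sketched in miniature. The class names, decay values and field layout below are illustrative assumptions, not the authors' implementation; the sketch only shows the two mechanisms the summary highlights: class-specific decay, and QUESTION objects whose importance grows rather than fades, making ignorance computable.

```python
from dataclasses import dataclass, field
from enum import Enum

class EpistemicClass(Enum):
    DECISION = "decision"
    HYPOTHESIS = "hypothesis"
    EVIDENCE = "evidence"
    OBSERVATION = "observation"
    QUESTION = "question"

# Hypothetical per-tick decay rates; QUESTION has a negative rate, so
# unresolved questions gain urgency over time instead of fading away.
DECAY = {
    EpistemicClass.DECISION: 0.01,
    EpistemicClass.HYPOTHESIS: 0.05,
    EpistemicClass.EVIDENCE: 0.02,
    EpistemicClass.OBSERVATION: 0.08,
    EpistemicClass.QUESTION: -0.03,
}

@dataclass
class KnowledgeObject:
    text: str
    epistemic_class: EpistemicClass
    importance: float = 1.0
    # Signed contradiction edges: other object id -> +1 (supports) / -1 (contradicts).
    edges: dict = field(default_factory=dict)

    def step(self) -> None:
        """One maintenance tick: apply class-specific decay (or urgency growth)."""
        self.importance *= 1.0 - DECAY[self.epistemic_class]

claim = KnowledgeObject("Q3 churn is driven by pricing", EpistemicClass.HYPOTHESIS)
gap = KnowledgeObject("Why did churn spike in region X?", EpistemicClass.QUESTION)
gap.edges["claim-1"] = -1  # records a contradiction rather than resolving it
for _ in range(10):
    claim.step()
    gap.step()
# After ten ticks the hypothesis has faded while the open question has
# gained importance: ignorance stays operationally visible.
```

The asymmetry is the whole point of the sketch: a flat archive would rank both objects by recency or similarity, whereas here the unresolved question climbs the ranking the longer it goes unanswered.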


Almeida, N. and Hoyer, J. (2019) ‘The Living Archive in the Anthropocene’, Journal of Critical Library and Information Studies, 2(3), pp. 1–39.

Almeida and Hoyer’s The Living Archive in the Anthropocene proposes the archive as a dynamic site where ecological crisis, cultural memory and political possibility are actively produced rather than merely recorded. The authors argue that dominant narratives of both the Anthropocene and the archive consolidate power: the former often reduces planetary crisis to a biophysical phenomenon, while the latter has historically privileged state authority, colonial memory and institutional neutrality. Against these closures, the living archive emerges as a participatory, place-based and generative counter-model, one that refuses nostalgia and instead treats archival practice as an intervention into social and ecological reality. Its significance lies in repositioning archives as spaces where communities may contest capitalism, environmental destruction, disciplinary silos and representational erasure. As a case study, the Interference Archive in Brooklyn demonstrates how this model operates materially: through open stacks, exhibitions, workshops, social-movement ephemera, collective labour and non-hierarchical governance, it preserves radical histories while enabling new forms of organising. The archive is therefore not a sealed container of the past, but a social ecology in which bodies, affects, artefacts and future-oriented solidarities interact. This proposition is especially urgent in the Anthropocene, where communities most affected by climate and economic violence are often excluded from the very narratives that claim to define planetary crisis. Ultimately, the living archive names a politics of memory that is anti-neutral, anti-extractive and emancipatory: it preserves not only what has happened, but what might still become possible.

Archives and science are inseparable because scientific knowledge depends not only on discovery, but on the systems that preserve, classify and legitimise evidence.

Archives are not passive storehouses; they are epistemic infrastructures that decide what can be found, trusted, cited, reused and remembered. Mbembe shows that archives transform selected fragments into public proof, while Stoler argues that archives must be studied as processes of knowledge production rather than merely mined as sources. This matters for science because datasets, specimens, metadata, citations and algorithmic records acquire authority only through institutional systems of description, preservation and access. Digital repositories such as DANS demonstrate that open data requires continuing human labour: archivists curate files, repair metadata, mediate access and make data intelligible beyond its original context. Likewise, metadata quality determines whether public data are genuinely reusable or merely nominally open. Yet archives also exclude: they privilege what is digitised, standardised and visible, while marginalising knowledge outside dominant infrastructures. Scientific archives must therefore preserve not only polished results, but uncertainty, error, context and provenance. Ultimately, trustworthy science depends on just archives: transparent, sustainable and critically aware systems that make evidence durable without pretending that memory is complete.


Lloveras, A. (2026) ‘Science, Memory and the Politics of Evidence’, Anto Lloveras, 12 May. Available at: https://antolloveras.blogspot.com/2026/05/science-memory-and-politics-of-evidence.html

Ananny, M. (2022) ‘Seeing Like an Algorithmic Error: What are Algorithmic Mistakes, Why Do They Matter, How Might They Be Public Problems?’, Yale Journal of Law & Technology, 24, pp. 342–364.


Ananny argues that algorithmic errors should not be treated as isolated technical malfunctions, because they expose the social, institutional and political conditions through which computational systems are designed, deployed and judged. The article’s central proposition is that to see like an algorithmic error is to interpret mistakes as sociotechnical events: failures produced not only by code, datasets or statistical thresholds, but also by organisations, business models, regulatory gaps, institutional values and unequal power to define what counts as success or harm. Rather than asking whether an algorithm merely “works”, Ananny asks who is authorised to name an error, whose injury becomes visible, and whether a failure is framed as a private glitch or a public problem. The case study of remote proctoring during the Covid-19 shift to online education illustrates this argument with particular force. A facial detection system used in exam surveillance produced higher error rates for darker-skinned students, while also presuming that all students could access quiet, visually controlled domestic environments. What initially appeared to be a technical bias in face detection therefore revealed a wider structure of racial, socioeconomic and pedagogical inequality. Ananny’s broader contribution is to insist that algorithmic mistakes can become democratic resources when they are analysed expansively rather than debugged narrowly. Consequently, algorithmic accountability requires more than accuracy improvements; it demands public scrutiny of the systems, assumptions and institutions that decide which failures matter, who must endure them and what forms of repair are imaginable.



Nogueras-Iso, J., Lacasta, J., Ureña-Cámara, M.A. and Ariza-López, F.J. (2017) ‘Quality of Metadata in Open Data Portals’, IEEE Access. doi: 10.1109/ACCESS.2017.DOI.

Nogueras-Iso, Lacasta, Ureña-Cámara and Ariza-López argue that the proliferation of Open Data portals has made metadata quality a decisive condition for discoverability, interoperability and reuse, because datasets cannot be effectively found, understood or accessed when their descriptions are incomplete, inconsistent or semantically imprecise. Their central contribution is to adapt an ISO 19157-based quality evaluation method, originally developed for geographic metadata, to the broader field of Open Data metadata structured through W3C’s DCAT vocabulary. The paper shows that Open Data initiatives often prioritise rapid publication through platforms such as CKAN or DKAN, yet this technical ease can conceal weak metadata practices that undermine transparency and public value. The authors therefore propose a rigorous evaluative framework combining automated and manual controls across dimensions such as completeness, logical consistency, temporal quality, thematic accuracy, positional correctness and free-text quality. A significant case study is the Spanish Government’s Open Data catalogue, datos.gob.es, whose metadata are assessed through measures including SPARQL-based checks, acceptance quality limits, manual sampling and representation of results through the Data Quality Vocabulary. The figures on page 10 clarify the workflow for automated quality reporting and the temporal logic used to verify whether publication, modification and validity dates are coherent. Ultimately, the article demonstrates that metadata are not secondary administrative supplements but epistemic infrastructures: they determine whether open data can genuinely function as public knowledge, reproducible evidence and reusable civic resource.
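Two of the quality dimensions the authors automate, completeness and temporal coherence, can be sketched over DCAT-style records. The field names and sample records below are illustrative assumptions, not the paper's actual SPARQL controls; the sketch only shows the kind of check their framework runs against a catalogue such as datos.gob.es.

```python
from datetime import date

# Hypothetical DCAT-style records (field names loosely follow dcat:/dct: terms).
records = [
    {"title": "Air quality 2023", "description": "Hourly NO2 readings",
     "issued": date(2023, 1, 10), "modified": date(2023, 6, 1),
     "license": "CC-BY-4.0", "keyword": ["air", "NO2"]},
    {"title": "Budget", "description": "",
     "issued": date(2024, 5, 1), "modified": date(2024, 2, 1),
     "license": None, "keyword": []},
]

REQUIRED = ("title", "description", "license", "keyword")

def completeness(record: dict) -> float:
    """Completeness dimension: share of required fields that are non-empty."""
    filled = sum(1 for f in REQUIRED if record.get(f))
    return filled / len(REQUIRED)

def dates_coherent(record: dict) -> bool:
    """Temporal-quality dimension: modification must not precede publication."""
    return record["issued"] <= record["modified"]

# A minimal quality report: one row per dataset description.
report = [(r["title"], completeness(r), dates_coherent(r)) for r in records]
```

In the paper's own workflow these controls run as SPARQL queries against harvested catalogue metadata and are aggregated against acceptance quality limits; the Python version merely makes the per-record logic explicit.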


Mbembe, A. (2002) ‘The Power of the Archive and its Limits’, in Hamilton, C., Harris, V., Taylor, J., Pickover, M., Reid, G. and Saleh, R. (eds.) Refiguring the Archive. Dordrecht, Boston and London: Kluwer Academic Publishers, pp. 19–26.

Mbembe conceptualises the archive not as a neutral repository of documents but as a political, architectural and ritual apparatus through which states organise time, death, authority and collective memory. The archive’s power derives from the inseparability of building and document: files acquire meaning not simply because they contain information, but because they are classified, sealed, preserved and housed within institutional spaces whose austerity resembles both temple and cemetery. This transformation from ordinary document to archive is therefore an act of selection and exclusion, since only certain traces are judged “archivable”, while others are discarded, silenced or denied public status. For Mbembe, the archive is not data but status: it confers proof, legitimacy and narrative possibility upon fragments of life, yet it also dispossesses those fragments from their original authors by making them part of a collective domain. A crucial case study lies in his account of the state’s paradoxical relation to archives: no state exists without archives, yet archives threaten the state because they preserve debts, violence and unresolved claims that power would prefer to consume or erase. When states destroy archives, the absent document returns as spectre; when they commemorate archives, memory risks becoming a talisman that pacifies anger, guilt and demands for justice. Consequently, the archive is both indispensable and limited: it enables history by rescuing debris from oblivion, but it also disciplines the dead, translates autonomous voices into institutional evidence and transforms memory into a governed public inheritance.


Socioplastics constructs a sovereign epistemic field where architecture operates as infrastructural epistemology, extending Joseph Beuys’s social sculpture into a stratigraphic, machine-readable corpus that hardens transient thought into persistent public terrain. In Anto Lloveras’s long-duration project, initiated in 2009 through LAPIEZA-LAB, knowledge is no longer represented but metabolically built: indexed, DOI-anchored, and released as navigable environment. Across three tomes, thirty books, and three thousand nodes, the work refuses the ephemerality of discourse in favour of durable semantic deposition. It treats conceptual labour as field architecture, producing not objects or texts but a coherent, expandable infrastructure legible to both human and machinic agents. This is theory as construction site, where citation becomes structural commitment and the corpus itself emerges as operative public culture.



Lloveras advances a decisive shift from representational to operative strata. Practice here is not illustration but epistemic ground: every entry functions as timestamp, address, and load-bearing element within a larger technical body. The project’s recursive indexing and citational protocols enforce metabolic condensation, transforming dispersed actions—urban interventions, writings, collaborations—into a navigable archive that resists entropic dissipation.

Daston, L. and Galison, P. (2010) Objectivity. New York: Zone Books.



Lorraine Daston and Peter Galison’s Objectivity presents objectivity as a historically formed epistemic virtue sustained through disciplined practices of seeing, representing and judging. The book’s architecture, visible in its contents, moves from an epistemology of the eye to truth-to-nature, mechanical objectivity, the scientific self, structural objectivity, trained judgement and the passage from representation to presentation. This sequence establishes objectivity as a changing moral and technical regime in which scientific images, atlases and instruments organise what counts as reliable knowledge. The case synthesis emerges in the transition from truth-to-nature to mechanical objectivity: earlier scientific representation privileges expert selection, idealisation and the depiction of typical forms, while later mechanical objectivity elevates photography, automatic inscription and self-surveillance as practices of restraint. The later emphasis on trained judgement enriches this genealogy by showing how scientific accuracy also depends upon cultivated discernment, practical expertise and responsible interpretation. Objectivity therefore appears as a history of scientific personae: the observer learns when to intervene, when to withhold intervention, and how to convert perception into communicable evidence. The definitive implication is that scientific knowledge rests on epistemic virtues embedded in instruments, images, habits of attention and collective standards. Daston and Galison thus offer a powerful account of objectivity as a practice of disciplined vision, historically renewed through the evolving relation between knower, image and world. 

Genette, G. (1989) Palimpsestos: la literatura en segundo grado. Translated by C. Fernández Prieto. Madrid: Taurus.


Gérard Genette’s Palimpsestos establishes a foundational grammar for understanding literature not as isolated textual singularity, but as a field of transtextual relations in which every work is marked by visible or latent connections to others. The excerpt’s central proposition is taxonomic yet profoundly interpretative: textuality is constituted by forms of transcendence that exceed the individual text. Genette distinguishes five relations—intertextuality, as copresence through citation, plagiarism or allusion; paratextuality, as the threshold formed by titles, prefaces, notes and other framing devices; metatextuality, as commentary; architextuality, as generic belonging; and hypertextuality, the privileged object of Palimpsestos. The latter designates any relation by which a text B, the hypertext, derives from a prior text A, the hypotext, without simply commenting on it. His case synthesis turns on The Odyssey: Joyce’s Ulysses transforms Homer by relocating its action to twentieth-century Dublin, whereas Virgil’s Aeneid imitates Homer more indirectly by extracting an epic model and applying it to another narrative. This distinction between transformation and imitation gives Genette’s theory its analytic precision. The conclusion is that literature is fundamentally palimpsestic: every work may evoke another, yet some texts declare this dependence massively, contractually and structurally, making derivation not a defect of originality but the very engine of literary invention. 

Chun, W.H.K. (2016) Updating to Remain the Same: Habitual New Media. Cambridge, MA: MIT Press.

Wendy Hui Kyong Chun’s Updating to Remain the Same offers a subtle theory of habitual new media, arguing that digital technologies become most powerful not when they appear radically new, but when their operations disappear into routine. Against narratives of disruption, virality and innovation, Chun shows that networked media organise users through repetition: searching, updating, sharing, friending, mapping, saving and deleting. The book’s core formula, Habit + Crisis = Update, captures how digital systems manufacture dependency by repeatedly presenting ordinary maintenance as urgent transformation. The case synthesis emerges in the preview’s opening materials: the preface describes new media as “wonderfully creepy” because they unsettle boundaries between publicity and privacy, surveillance and entertainment, intimacy and work, while the introduction shows how smartphones, search engines and social platforms structure everyday knowledge, memory and sociality precisely because they have become banal. The visual contrast on page 12, reworking the old internet dog cartoon into a metadata-surveillance scenario, condenses Chun’s historical argument: the internet has shifted from an imagined anonymous cyberspace to a regime of identification, prediction and exposure. Yet Chun resists simple technological determinism. Her concern is not merely surveillance, but the neoliberal production of the endlessly addressed YOU, a user made responsible for adaptation while institutions remain unchallenged. The conclusion is therefore critical and political: to inhabit networks differently, we must move beyond false promises of privacy-as-security and demand public rights to vulnerability, exposure and collective protection. 

Aria, M., Le, T., Cuccurullo, C., Belfiore, A. and Choe, J. (2023) ‘openalexR: An R-Tool for Collecting Bibliometric Data from OpenAlex’, The R Journal, 15(4), pp. 167–180.

Aria, Le, Cuccurullo, Belfiore and Choe position openalexR as a methodological bridge between open scholarly metadata and reproducible bibliometric analysis. The article begins from a decisive premise: bibliographic databases are indispensable for research assessment and science mapping, yet their utility depends on coverage, citation completeness, update speed, API accessibility and permissive terms of use. OpenAlex, launched in 2022 as a fully open catalogue of scholarly metadata, is therefore presented as a crucial alternative to commercial infrastructures such as Web of Science and Scopus. The paper’s case synthesis lies in the R package itself: openalexR simplifies interaction with the OpenAlex REST API by generating valid queries, downloading matching entities and converting nested outputs into classical bibliographic data frames usable in bibliometrix. The diagram on page 2 shows OpenAlex’s eight interconnected entities—works, authors, institutions, sources, concepts, publishers, funders and geo—while the workflow on page 3 clarifies how openalexR moves from API query to analysable data. Its examples on bibliometrics demonstrate concept retrieval, source ranking, author and institutional profiling, citation-based identification of seminal works, snowball searching and N-gram extraction; the visualisations on pages 7–11 illustrate trends, journal expansion, citation networks and thematic bigrams. The conclusion is that openalexR transforms open research information into executable analytical practice, lowering technical barriers while advancing transparency, reproducibility and non-proprietary bibliometric inquiry. 
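The query-generation step that openalexR performs before calling the OpenAlex REST API can be sketched in Python. The endpoint and filter syntax follow the public OpenAlex API; the specific work identifier and filter values are illustrative examples, not drawn from the article.

```python
from urllib.parse import urlencode

OPENALEX_BASE = "https://api.openalex.org"

def build_works_query(filters: dict, per_page: int = 25) -> str:
    """Compose a /works request URL of the kind openalexR generates:
    filters become comma-separated key:value pairs in a single filter param."""
    filter_expr = ",".join(f"{k}:{v}" for k, v in filters.items())
    params = urlencode({"filter": filter_expr, "per-page": per_page})
    return f"{OPENALEX_BASE}/works?{params}"

# e.g. works citing a given work since 2020 (identifier illustrative);
# this is the raw form of the snowball searching the package supports.
url = build_works_query({"cites": "W2741809807",
                         "from_publication_date": "2020-01-01"})
```

Fetching the URL returns nested JSON entities; the package's remaining contribution, flattening those into classical bibliographic data frames for bibliometrix, is omitted here.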

Peroni, S. and Shotton, D. (2019) OpenCitations, an infrastructure organization for open scholarship. arXiv:1906.11964v3, pp. 1–24.

Peroni and Shotton present OpenCitations as a direct infrastructural challenge to proprietary citation regimes, arguing that bibliographic citations—directional links through which scholarship acknowledges prior work—should be treated as open, reusable and machine-actionable public knowledge. The paper’s central intervention is both political and technical: citation data locked inside Web of Science, Scopus or similarly restricted platforms impede equitable access, reproducible bibliometrics and accountable research assessment, whereas OpenCitations publishes citation data as Linked Open Data using Semantic Web standards. Its case synthesis is embodied in COCI, the OpenCitations Index of Crossref open DOI-to-DOI citations, which the paper reports as containing over 445 million citations, alongside the OpenCitations Corpus, Open Citation Identifiers, SPAR ontologies, REST APIs, SPARQL endpoints and downloadable CC0 datasets. The diagram on page 9 clarifies the OpenCitations Data Model by showing how bibliographic resources, citations, identifiers, agents, roles and references are semantically interlinked; pages 15–17 then evidence community uptake through access statistics, a global usage map and Figshare download figures. The crucial conceptual move is to treat citations as first-class data entities, rather than mere links, thereby enabling provenance tracking, network analysis, reuse and verification. The conclusion is that open citation infrastructure does not simply improve discovery; it redistributes bibliometric power, making scholarly evaluation less dependent on opaque commercial indexes and more answerable to a global research commons. 
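The REST access route to COCI that the paper describes can be sketched as URL construction. The endpoint paths follow the published OpenCitations COCI API; the DOI used is a placeholder, not a reference from the paper, and no network call is made here.

```python
from urllib.parse import quote

COCI_API = "https://opencitations.net/index/coci/api/v1"

def citations_url(doi: str) -> str:
    """URL listing all COCI citations that point at a DOI (incoming links)."""
    return f"{COCI_API}/citations/{quote(doi, safe='')}"

def references_url(doi: str) -> str:
    """URL listing all outgoing DOI-to-DOI references recorded for a DOI."""
    return f"{COCI_API}/references/{quote(doi, safe='')}"

# Each JSON record returned is a first-class citation entity: it carries an
# Open Citation Identifier (oci), citing and cited DOIs, and timespan data,
# rather than being a mere link between two documents.
url = citations_url("10.1000/example")  # placeholder DOI
```

Treating the citation itself as the addressable record is what enables the provenance tracking, network analysis and verification the paper argues for; the same data are also exposed via SPARQL and CC0 dumps.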

Beard, R. and Kuchma, I. (2016) Innovations in Scholarly Communication – Results from EIFL Countries. EIFL presentation, pp. 1–63.

Beard and Kuchma’s presentation situates contemporary scholarly communication within a proliferating ecology of digital tools, arguing that libraries must no longer confine themselves to collection provision but actively mediate the entire research workflow. Drawing on the 101 Innovations in Scholarly Communication survey, conducted between May 2015 and February 2016, the authors map research as a cycle extending from discovery, analysis and writing to publication, outreach and assessment. The visual workflow on pages 7–11 is especially instructive: it aligns library services—data management plan review, reference-management training, open access repository support, systematic-review assistance, post-publication sharing and metrics advice—with concrete researcher practices. The EIFL case synthesis sharpens this argument through 674 responses from 38 countries, with strong participation from Ukraine, Poland and Ghana, and a disciplinary profile in which social sciences constitute the largest share of EIFL responses. The charts on pages 43–45 expose a familiar disjunction: researchers overwhelmingly support open science in principle, yet comparatively fewer adopt open data and code-sharing tools in practice. This gap defines the library’s strategic mandate. Rather than merely recommending platforms, librarians must inform, train, advise, advocate and co-shape institutional policy, as page 59’s support model proposes. The conclusion is therefore pragmatic and political: libraries become infrastructural translators, converting chaotic tool abundance into equitable, multilingual, low-cost and sustainable research practice across diverse scholarly contexts. 

Barcelona Declaration on Open Research Information (2024) Barcelona Declaration on Open Research Information. 16 April. doi:10.5281/zenodo.10958522.

The Barcelona Declaration on Open Research Information formulates a decisive institutional response to the growing dependence of research systems on proprietary, opaque and commercially governed metadata infrastructures. Its central proposition is that the information used to evaluate researchers, allocate resources, set strategic priorities and trace scientific influence must itself be open, reusable, interoperable and accountable. The Declaration identifies a profound contradiction in contemporary scholarship: institutions often assess open science through closed databases, thereby grounding consequential decisions in evidence that cannot be independently audited, corrected or reproduced. Its four commitments establish a practical architecture of reform: making openness the default for research information used and produced; working only with systems that enable open metadata export through standard protocols and persistent identifiers; sustaining open scholarly infrastructures through community governance and equitable financial support; and coordinating collective action to accelerate transition. The case synthesis is especially clear in the contrast between closed systems such as Web of Science and Scopus, described in Annex A as examples of restricted infrastructures, and open alternatives including Crossref, DataCite, ORCID, OpenAlex, OpenCitations, OpenAIRE, PubMed, Europe PMC, La Referencia, SciELO and Redalyc. Through this contrast, the Declaration reframes metadata as a matter of academic sovereignty rather than administrative convenience. Its conclusion is unequivocal: responsible assessment, multilingual visibility and equitable science policy require open research information as the normative substrate of scholarly governance. 

Guédon, J.-C. (2011) ‘El acceso abierto y la división entre ciencia “principal” y “periférica”’, Crítica y Emancipación, 6, pp. 135–180.

Jean-Claude Guédon’s argument is that open access cannot be adequately understood as a benign improvement in scholarly distribution; it is a structural challenge to the historical machinery through which scientific authority has been concentrated. By mobilising Bourdieu’s notion of the scientific field, Guédon shows that journals, editorial boards, citation indexes and linguistic hierarchies convert technical competence into social power, thereby producing a global division between “principal” and “peripheral” science. The Science Citation Index becomes the exemplary case: by selecting a restricted set of journals, privileging English-language visibility and enabling impact-factor evaluation, it transforms a continuous spectrum of scholarly quality into a rigid boundary between recognised science and obscured knowledge. The article’s synthesis of Indian, Latin American and Venezuelan examples is especially revealing: locally urgent research, such as cholera investigation or regionally significant journals, may be devalued when judged by criteria designed for metropolitan centres, while peripheral researchers are pressured to contribute intellectual labour to agendas validated elsewhere. Against this asymmetry, Guédon identifies SciELO, institutional repositories and subsidised journal infrastructures as practical counter-models, because they strengthen local publishing ecologies without collapsing into provincial isolation. The conclusion is therefore political as much as bibliographic: open access becomes emancipatory only when it redistributes visibility, legitimates multilingual and locally grounded research, and dismantles the cartelised architecture that mistakes selective indexing for universal scientific excellence. 

Borgman, C.L. (2014) Big Data, Little Data, No Data: Scholarship in the Networked World. Presentation, pp. 1–49.

Christine L. Borgman’s Big Data, Little Data, No Data advances a rigorous corrective to technological triumphalism by arguing that data are neither self-evident objects nor neutral by-products of scholarship, but representations of observations, artefacts and phenomena that acquire evidential force only within interpretative, institutional and infrastructural contexts. The presentation’s early contrast between open access policies and diverse disciplinary datasets establishes the central tension: while governments, funders and universities increasingly demand openness, the rights, responsibilities and risks attached to data remain uneven across scientific, social-scientific and humanistic domains. This complexity is crystallised in the repeated proposition that data are not publications, not natural objects, and dependent upon knowledge infrastructures. The visual sequence reinforces this argument: page 23 juxtaposes mice, notebooks, maps, climate models and qualitative field notes to show data’s heterogeneity, while pages 26–28 contrast sensor-network measurements with survey and Twitter-based social science materials. As a case synthesis, Borgman’s sensor-network example demonstrates that reuse requires far more than access: nitrate-distribution readings become scholarly evidence only through metadata, provenance, calibration, classification, repositories and labour. The later discussion of economics, sustainability and libraries extends the claim by showing that research data oscillate between public goods, private goods and common-pool resources. The definitive implication is that open data must be governed as a socio-technical commons: curated, credited, preserved and interpreted through institutions capable of sustaining scholarly memory beyond immediate publication cycles. 

OPERAS (2023) Open scholarly communication for social sciences and humanities. Flyer, pp. 1–2.

OPERAS articulates a mature vision of open scholarly communication in which the Social Sciences and Humanities are not peripheral beneficiaries of European research infrastructure but constitutive agents of its intellectual diversity. As a non-profit organisation gathering more than fifty members across an extensive transnational map, it coordinates services, practices and technologies designed to answer the specific communication needs of SSH researchers within the European Research Area. Its strategic force lies in federation: rather than replacing local resources, OPERAS aggregates them into shared access points where scholars, libraries, publishers, policymakers and civic actors can encounter infrastructures otherwise dispersed by language, discipline or geography. The service ecosystem exemplifies this ambition: metrics platforms strengthen the visibility of open access monographs; GoTriple advances multilingual discovery across publications, datasets, profiles and projects; Pathfinder orients researchers towards appropriate publishing and service providers; and quality-assurance tools such as peer-review metadata increase trust in open access book publishing. The flyer’s first page visually reinforces this European breadth through a map of participating countries, while the second page specifies a practical architecture of analytics, discovery, quality assurance and research-for-society services. As a case synthesis, OPERAS demonstrates that scholarly openness is not reducible to free access; it requires multilingual discovery, transparent evaluation, sustainable publishing models and collaborative platforms linking research with society. Its definitive contribution is therefore infrastructural and cultural: it converts fragmented SSH communication into a federated, trustworthy and socially responsive knowledge commons.