Autonomous AI Digital Twins
Autonomous AI Digital Twins are among the most consequential — and least understood — developments in the management of complex socio-technical systems. They have evolved from passive simulation tools into self-adapting agents capable of monitoring, predicting, prescribing, and in some configurations, acting without direct human instruction. The governance implications of this shift are not marginal. They are foundational.
What makes this moment particularly pressing is the convergence of three forces: the maturation of AI systems capable of goal-directed behaviour, the proliferation of sensor and connectivity infrastructure that generates the data these systems require, and the urgency of governance challenges — in climate adaptation, supply chain resilience, and public health — that exceed the response capacity of conventional human-led institutions. Autonomous AI Digital Twins are being positioned as the answer. This article examines whether the frameworks being developed to govern them are equal to the task.
Socio-Technical Systems and the Governance Imperative
Socio-technical systems (STS) are characterised not by the dominance of either their social or technical components, but by the emergent properties that arise from their interaction. The concept originates in the organisational theory of Trist and Bamforth (1951), whose studies of British coal mining demonstrated that technical redesign without attention to social organisation consistently produced suboptimal outcomes. The implication — that optimising technical elements in isolation degrades system performance — has since been formalised across numerous disciplines, from software engineering (Baxter and Sommerville, 2011) to urban planning and public health governance.
Modern STS are considerably more complex than the industrial settings Trist and Bamforth studied. Smart cities integrate sensor networks, mobility systems, energy grids, public administration, and millions of individual actors whose behaviours are only partially predictable. Global supply chains span multiple regulatory jurisdictions, cultural contexts, and institutional frameworks simultaneously. Public health networks must coordinate clinical, logistical, governmental, and behavioural systems under conditions of acute uncertainty. In each case, the defining challenge is not data scarcity but the difficulty of deriving reliable, ethically defensible decisions from data at scale and in real time.
This is precisely where autonomous digital twins enter the frame. Bonetti et al. (2024) argue that digital twins of socio-technical ecosystems can “open up new ways of achieving common societal goals by providing an understanding of complex interactions and processes, and by facilitating the design of and participation in collective actions” (p. 275). The argument is persuasive, but it carries a condition: these systems must be governed as carefully as they govern. Without rigorous governance frameworks, autonomous digital twins risk embedding the biases, blind spots, and institutional priorities of their designers into the operations of systems that affect human welfare at scale.
Fatade et al. (2026) frame the urgency directly, observing that the potential for fully autonomous AI-enabled digital twins to support the modelling, monitoring, and governance of complex socio-technical systems “integrating physical infrastructure, human actors, and institutional frameworks is rapidly expanding” (p. 1). The scholarly community has responded with a body of framework literature — comparative, empirical, and theoretical — that this article synthesises. The aim is not to survey that literature exhaustively, but to identify the architectural patterns, governance mechanisms, and implementation strategies that have demonstrated coherence across diverse application domains.
From Passive Replica to Autonomous Agent
The evolution of digital twin technology is not incremental. It represents a categorical shift in the relationship between a system and its model. Grieves and Vickers (2017) articulated the foundational concept of the digital twin as a virtual counterpart to a physical asset, maintaining a live synchronised relationship through bidirectional data flows. Early implementations — principally in aerospace and manufacturing — confined DTs to monitoring and simulation functions. The twin observed; decisions remained with human operators.
Tao et al. (2018, 2019) documented the subsequent expansion of DT capabilities into predictive maintenance and process optimisation, enabled by advances in sensor technology, cloud computing, and machine learning. The descriptive twin became a predictive one. Yet the transition from predictive to prescriptive — and from prescriptive to autonomous — required something more than incremental computational improvement. It required the integration of AI systems capable of goal-directed behaviour, self-adaptation, and decision-making under uncertainty.
Fatade et al. (2026) define this advanced paradigm as incorporating “hyper-predictive and prescriptive capabilities” aligned with AI principles of adaptive intelligence, goal-directed behaviour, and bounded autonomy (p. 2). Nechesov et al. (2025) extend the vision further, proposing virtual cities as digital twins that progress from simulation environments into autonomous AI societies — entities capable of modelling not only the physical and structural dimensions of urban systems but their cognitive, cultural, and economic dynamics. In this framing, the digital twin is no longer a mirror of the real system. It is a parallel socio-technical entity capable of contributing to governance in its own right.
This progression creates the central tension of the field. An autonomous DT that can act — that can reallocate resources, modify parameters, trigger interventions — is a governance instrument of considerable power. Understanding how that power is constrained, legitimised, and held accountable is the animating question across all the frameworks reviewed here. Research teams building AI-assisted decision intelligence at Twinlabs have consistently identified governance design as at least as consequential as technical architecture in real-world deployment contexts.
Miadowicz et al. (2026) trace the trajectory from conventional DTs to fully autonomous socio-technical systems, emphasising that the transition is not simply about increasing computational capability. It is about redefining the human-machine relationship in governance contexts where the stakes — equity, accountability, resilience — are not abstract but immediate.
Core Architectural Paradigms
Two dominant architectural paradigms govern the design of autonomous AI-enabled DTs. Understanding their respective strengths and constraints is essential for selecting an appropriate framework for any given STS governance context.
Centralised vs. Decentralised Approaches
Centralised architectures maintain a unified data repository and processing core, enabling global optimisation across the system. Their advantage is coherence: a single model can reason across all available data without the latency or inconsistency risks of distributed processing. Their vulnerability is equally clear. A centralised DT creates a single point of failure, concentrates governance authority, and generates significant privacy risks when applied to systems involving human actors — which all socio-technical systems do.
Decentralised architectures distribute intelligence across modular nodes using multi-agent systems, edge computing, and federated learning (McMahan et al., 2017; Li et al., 2020). Fatade et al. (2026) argue that decentralised designs prove superior in distributed STS, reducing latency and preserving privacy. Chen et al. (2025, as cited in Fatade et al., 2026) demonstrate this in energy grid management, where federated learning allows individual nodes to adapt locally without centralising sensitive consumption data. The cost is coordination overhead: ensuring coherent behaviour across autonomous agents requires explicit governance mechanisms at the network level. The multi-agent systems literature — Wooldridge and Jennings (1995) remains the foundational text — provides the theoretical grounding for managing this coordination problem.
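The federated pattern cited here (McMahan et al., 2017) can be sketched briefly. The following is a minimal, illustrative FedAvg round on synthetic data; the linear model, the node data, and the learning rate are assumptions made for the sketch, not details from the cited energy-grid deployment:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One node's local training: plain least-squares gradient descent.
    Raw data (X, y) never leaves the node; only updated weights do."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(global_w, nodes):
    """FedAvg coordination round: each node trains locally, and the
    coordinator averages the returned weights, weighted by node size."""
    updates, sizes = [], []
    for X, y in nodes:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(np.stack(updates), axis=0, weights=np.array(sizes, float))

# Hypothetical grid nodes with local consumption data that is never pooled
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
nodes = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.01, size=50)
    nodes.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = federated_average(w, nodes)
```

The governance-relevant property is visible in the structure itself: `local_update` sees raw data, while `federated_average` sees only model weights, never the underlying observations.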
Model-Driven Engineering Architectures
Bonetti et al. (2024) advance a model-driven engineering (MDE) architecture that integrates heterogeneous modelling languages into a unified DT ensemble. The architecture draws on SysML for technical system layers, UML for structural representations, and User Requirements Notation (URN) for social goals and stakeholder objectives. Services within this architecture include Monitoring (metric extraction from live data), Prediction (AI-powered simulation of future states), Recommendation (activity proposals for human or automated action), and Display (outputs designed for human interpretation, including motivational strategies grounded in self-determination theory as articulated by Deci and Ryan, 2000).
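As a rough sketch of how these four services might compose into a pipeline (the function bodies below are hypothetical placeholders, not the authors' implementation):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TwinEnsemble:
    """Illustrative composition of the four services named above."""
    monitor: Callable[[dict], dict]    # Monitoring: live data -> metrics
    predict: Callable[[dict], dict]    # Prediction: metrics -> forecast
    recommend: Callable[[dict], list]  # Recommendation: forecast -> proposals
    display: Callable[[list], str]     # Display: proposals -> human-readable output

    def step(self, live_data: dict) -> str:
        metrics = self.monitor(live_data)
        forecast = self.predict(metrics)
        proposals = self.recommend(forecast)
        return self.display(proposals)

# Toy wiring for a mobility twin (all functions and thresholds hypothetical)
twin = TwinEnsemble(
    monitor=lambda d: {"load": sum(d["trips"]) / len(d["trips"])},
    predict=lambda m: {"load_next_hour": m["load"] * 1.1},
    recommend=lambda f: (["add carpool incentive"]
                         if f["load_next_hour"] > 0.8 else []),
    display=lambda ps: "; ".join(ps) or "no action proposed",
)
print(twin.step({"trips": [0.9, 0.8, 1.0]}))  # prints "add carpool incentive"
```

The point of the composition is that Display remains the human-facing boundary: the ensemble proposes, it does not act.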
Kalaboukas et al. (2023) propose a network-based architecture for supply chain cognitive digital twins (SC-CDTs), where individual DTs collaborate under a shared governance framework. This modular design supports hierarchical composition: a global SC-CDT orchestrates local DTs representing individual actors, assets, and processes across the supply network. The architecture aligns with the interoperability requirements that Bonetti et al. (2024) identify as essential for cross-domain governance, and with the broader cyber-physical systems design principles articulated by Lee (2008) and Rajkumar et al. (2010).
The two approaches are not mutually exclusive. The most robust implementations combine physics-informed modelling — which maintains domain validity against first principles — with data-driven learning for behavioural adaptation. This hybrid approach, which Fatade et al. (2026) identify as the current technical frontier, is particularly relevant to the AI twin and decision intelligence architectures that Twinlabs has been developing for multi-domain deployment scenarios.
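A minimal sketch of the hybrid idea, assuming a toy exponential baseline as the "physics" and a low-order polynomial residual as the data-driven correction (both are stand-ins chosen for brevity, not methods from the cited papers):

```python
import numpy as np

def physics_model(t, capacity=100.0, rate=0.05):
    """First-principles baseline: saturating growth toward capacity,
    a stand-in for whatever domain physics the twin encodes."""
    return capacity * (1 - np.exp(-rate * t))

def fit_residual(t_obs, y_obs, degree=2):
    """Data-driven correction: fit a low-order polynomial to the gap
    between observations and the physics baseline, then return a
    hybrid predictor that adds the learned correction to the physics."""
    residual = y_obs - physics_model(t_obs)
    coeffs = np.polyfit(t_obs, residual, degree)
    return lambda t: physics_model(t) + np.polyval(coeffs, t)

# Synthetic observations where the real system drifts from the physics model
t_obs = np.linspace(0, 50, 25)
y_obs = physics_model(t_obs) + 0.01 * t_obs**2  # unmodelled behavioural drift
hybrid = fit_residual(t_obs, y_obs)
```

The physics term keeps predictions anchored to first principles even far from the training data, while the residual term absorbs behaviour the physics does not model, which is the division of labour the hybrid approach relies on.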
Governance Protocols and Ethical Constraints
Autonomy without governance is not a design achievement. It is a liability. This principle appears consistently across the framework literature, and it carries specific implications for how DT governance must be structured in socio-technical contexts.
Fatade et al. (2026) identify three categories of governance risk unique to autonomous DTs: bias amplification, where AI systems encode and reproduce the distributional inequities present in their training data; overreliance, where human operators cede judgement to systems that warrant scrutiny; and accountability erosion, where automated decision chains obscure the locus of responsibility when outcomes are harmful. Each risk has a governance response — audit mechanisms, human-in-the-loop protocols, and transparent decision logs — but implementing these responses requires institutional commitment, not just technical design.
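One of the three responses named above, the transparent decision log, can be sketched as a hash-chained append-only record, so that retroactive edits are detectable at audit time. The actor names and fields below are hypothetical:

```python
import hashlib
import json

class DecisionLog:
    """Append-only, tamper-evident decision log: each entry includes a
    hash of the previous entry, so the chain breaks if history is edited."""
    def __init__(self):
        self.entries = []

    def record(self, actor, inputs, decision, human_reviewed):
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {
            "actor": actor,                    # locus of responsibility
            "inputs": inputs,                  # evidence the decision rests on
            "decision": decision,
            "human_reviewed": human_reviewed,  # human-in-the-loop flag
            "prev": prev_hash,
        }
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "hash": digest})
        return digest

    def verify(self):
        """Recompute the chain; False means the log was altered."""
        prev = "genesis"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

# Hypothetical autonomous decision followed by a logged human override
log = DecisionLog()
log.record("grid-twin-07", {"load": 0.93}, "shed 5% non-critical load", human_reviewed=False)
log.record("operator", {"reason": "hospital district"}, "override: restore load", human_reviewed=True)
```

Changing any recorded entry invalidates its stored hash and every subsequent link, which `verify()` reports, so the locus of responsibility for each decision stays reconstructable.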
Kalaboukas et al. (2023) structure governance around three interdependent views. The business and sustainability view defines system priorities, including circular economy objectives, social impact metrics, and stakeholder equity considerations. The data governance view establishes policies for data sharing, access, and processing across the DT network. The AI models governance view specifies rules for model training, validation, and alignment with collaborative governance terms. Together, these views operationalise what Jobin, Ienca, and Vayena (2019) identify as the core principles of AI ethics — transparency, justice, non-maleficence, responsibility, and privacy — within an institutional framework adapted to DT contexts.
Floridi et al. (2018) provide a complementary structure in the AI4People ethical guidelines, which emphasise beneficence, non-maleficence, autonomy, justice, and explicability as foundational principles for AI systems affecting human welfare. In the DT governance context, explicability is particularly critical: a system whose reasoning is opaque cannot be held accountable, and cannot generate the institutional trust required for legitimate governance authority. Arrieta et al. (2020) survey explainable AI (XAI) methods relevant to this challenge, demonstrating that interpretability and predictive performance are not inherently opposed — a finding that significantly strengthens the case for XAI integration in autonomous DT design.
The AI governance frameworks developed at AI Coach engage directly with these institutional challenges, particularly the question of how organisations build meaningful AI oversight capacity without requiring deep technical expertise in every function. The insight is directly applicable to DT governance contexts: governance competence must be built at the institutional level, not assumed from technical design alone.
Bonetti et al. (2024) integrate motivational strategies — gamification mechanisms including points, badges, and leaderboards — into the governance architecture, drawing explicitly on self-determination theory (Deci and Ryan, 2000) to foster intrinsic human participation. The design philosophy is explicitly human-centred: autonomy is bounded, fallback mechanisms are made explicit, and human agency is reinforced rather than replaced. This approach reflects a broader design consensus emerging across the field — that the most defensible autonomous DT architectures are those that preserve meaningful human involvement even as they reduce routine human workload.
A Spectrum of Autonomy: Levels and Implementation Pathways
The concept of autonomy in DT systems is not binary. Fatade et al. (2026) delineate four distinct levels: monitoring (descriptive), predictive, prescriptive, and fully autonomous. These levels are mapped onto a two-dimensional matrix against human involvement — ranging from high to low — and application domain, from healthcare to smart cities. The matrix provides a practical instrument for implementation planning, allowing governance architects to specify the appropriate autonomy level for each function within a complex STS.
At the monitoring level, the DT replicates and reports. Human interpretation and decision-making remain fully intact. At the predictive level, AI simulation generates probabilistic forecasts that human actors can act upon or disregard. At the prescriptive level, the DT recommends specific interventions, and the boundary between human and machine judgement begins to blur. At full autonomy, the DT acts — subject to defined constraints and governance mechanisms — without real-time human instruction.
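The level structure lends itself to a simple per-function encoding. The sketch below illustrates the matrix idea that autonomy is assigned per function rather than system-wide; the function names and assignments are hypothetical:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """The four levels described above, ordered so that comparisons
    express escalating machine authority."""
    MONITORING = 1    # twin replicates and reports; humans decide
    PREDICTIVE = 2    # twin forecasts; humans may act or disregard
    PRESCRIPTIVE = 3  # twin recommends specific interventions
    AUTONOMOUS = 4    # twin acts within defined constraints

# Hypothetical per-function assignment within one smart-city twin
FUNCTION_LEVELS = {
    "air_quality_reporting": AutonomyLevel.MONITORING,
    "traffic_forecasting": AutonomyLevel.PREDICTIVE,
    "signal_timing_proposals": AutonomyLevel.PRESCRIPTIVE,
    "street_lighting_control": AutonomyLevel.AUTONOMOUS,
}

def may_act_without_human(function: str) -> bool:
    """A governance check: only functions explicitly granted full
    autonomy may act without real-time human instruction."""
    return FUNCTION_LEVELS[function] >= AutonomyLevel.AUTONOMOUS
```

Making the assignment explicit data, rather than implicit behaviour, is what allows the graduated escalation described next to be audited.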
Implementation best practice is not to begin at full autonomy and work backwards. It is to begin at monitoring or prediction, build institutional understanding of the system’s behaviour and limitations, and escalate autonomy levels as trust and governance capacity develop in parallel. This graduated approach is consistent with the human-in-the-loop design principles documented by Monarch (2021) and with the broader AI governance arguments advanced by Russell (2019) and Dafoe (2018), both of whom emphasise that the pace of capability development must be matched by equivalent progress in governance infrastructure.
Bonetti et al. (2024) operationalise this phased approach through a three-stage model: co-design (collaborative planning with stakeholders), co-production (iterative development with continuous feedback), and co-delivery (deployment with embedded evaluation metrics). The model is explicitly participatory: governance legitimacy requires the involvement of the communities that autonomous DTs will affect. This principle extends beyond academic design theory — it is a practical implementation requirement. Practitioners navigating phased AI system deployment will find the step-by-step implementation frameworks at 1 Hour Guide a useful resource for translating these staged approaches into actionable operational planning.
Kalaboukas et al. (2023) operationalise autonomy levels through formal governance agreements within SC-CDT configurations, specifying in advance the conditions under which each DT may act, the scope of its authority, and the escalation protocols when decisions exceed its defined autonomy envelope. This contractual approach to AI autonomy is practically robust: it makes governance commitments explicit, auditable, and revisable as system performance data accumulates.
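Such an agreement can be made machine-checkable. The sketch below uses hypothetical field names and thresholds; it illustrates the envelope-plus-escalation pattern rather than reproducing the SC-CDT specification:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AutonomyEnvelope:
    """Illustrative governance agreement for one DT: what it may do
    alone, how large an impact it may make, and where to escalate."""
    scope: frozenset   # actions this twin is authorised to take
    max_impact: float  # largest resource change it may make unassisted
    escalate_to: str   # who reviews out-of-envelope decisions

def decide(envelope: AutonomyEnvelope, action: str, impact: float):
    """Act autonomously only inside the envelope; otherwise escalate,
    leaving an explicit record of the decision route."""
    if action in envelope.scope and impact <= envelope.max_impact:
        return ("execute", action)
    return ("escalate", envelope.escalate_to)

# Hypothetical envelope for a component-reuse twin
reuse_twin = AutonomyEnvelope(
    scope=frozenset({"reuse_component", "schedule_refurbishment"}),
    max_impact=0.05,  # e.g. at most 5% of inventory per decision
    escalate_to="supply-chain governance board",
)
```

Because the envelope is explicit data rather than logic buried in a model, it can be audited against the governance agreement and revised as performance evidence accumulates, which is precisely the property the contractual approach is after.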
Empirical Evidence: What the Case Studies Reveal
The frameworks reviewed here are grounded in empirical application. The case studies demonstrate both the practical value of autonomous DTs and the governance challenges that emerge under operational conditions.
Bonetti et al. (2024) present two instructive cases from northern Italy. In Trentino-South Tyrol, a DT was deployed to simulate carpooling interventions for rural mobility. AI prediction models estimated participation rates across demographic segments, and gamification mechanisms were applied to overcome behavioural barriers in low-connectivity communities. The outcome was measurable improvement in transport access — but the governance insight was equally significant: the motivational design was as important as the predictive accuracy. A technically excellent DT that could not enlist participation could not generate impact. This finding aligns with the self-determination theory framework of Deci and Ryan (2000), which distinguishes sharply between extrinsic compliance and intrinsic engagement as determinants of sustained behavioural change.
The second case, in Ferrara, addressed food waste reduction. Sensor data from multiple points in the supply and consumption chain identified waste patterns; the DT recommended redistribution hub collaborations and applied motivational rewards to drive behavioural change among participants. The case demonstrates that autonomous DTs can coordinate collective action across institutional boundaries — a capability with significant implications for SDG-aligned governance at municipal and regional scales.
Kalaboukas et al. (2023) illustrate a circular economy case in refrigerator supply chain management. The SC-CDT integrated sensor data from product lifecycles, analytics from maintenance and repair histories, and distributed technician network inputs to autonomously optimise decisions about component reuse, refurbishment, and end-of-life processing. The governance achievement was the alignment of technical optimisation with sustainability commitments defined at the business governance level: the system was not simply efficient, it was accountable to a defined set of values.
Fatade et al. (2026) synthesise cross-domain evidence from energy and healthcare applications, demonstrating that decentralised architectures consistently enable more resilient governance than centralised ones in geographically distributed systems. The finding aligns with theoretical arguments advanced by Wooldridge and Jennings (1995) on multi-agent system design, and with the empirical literature on federated learning in privacy-sensitive domains (McMahan et al., 2017). Nechesov et al. (2025) extend the empirical frame to urban simulation, proposing virtual city environments in which autonomous DTs model economic, cultural, and scientific dynamics with fidelity sufficient to generate actionable governance intelligence about system behaviour under stress conditions.
Across these cases, measurable gains in resilience, efficiency, and equity are consistently associated with frameworks that align technical autonomy with explicit socio-technical governance constraints — not frameworks that prioritise technical performance while treating governance as a compliance afterthought.
Unsolved Challenges and Mitigation Strategies
The frameworks are robust. The challenges are real. Three warrant particular attention: fragmentation, scalability, and trust.
Fragmentation arises when DTs designed for specific domains or organisations cannot interoperate, creating governance blind spots at system boundaries. In a global supply chain, a fragmentation failure at the intersection of a manufacturing DT and a logistics DT can produce catastrophic errors in resource allocation. The mitigation requires standards-based interoperability, and the model-driven engineering approaches of Bonetti et al. (2024) offer the most practically advanced methodology currently available. Without common representational standards, the promise of cross-domain governance cannot be realised regardless of how sophisticated individual DTs become.
Scalability becomes problematic as STS complexity increases. A DT that governs effectively at city-district scale may lose coherence at metropolitan scale, where the number of interacting agents, feedback loops, and institutional interdependencies grows non-linearly. Hybrid modelling — combining physics-informed models for structural stability with data-driven models for behavioural adaptation — is the current best response, though its scalability limits are not yet fully characterised in the literature. Researchers working on scalable AI decision intelligence at Twinlabs are actively investigating these boundaries in multi-domain deployment scenarios, and their findings are likely to inform the next generation of governance framework design.
Trust is the most consequential challenge and the least amenable to technical resolution. An autonomous DT earns institutional trust through demonstrated performance, transparent reasoning, and consistent alignment with governance values. It loses trust through unexplained decisions, miscalibrated predictions, or behaviour that appears to serve system efficiency at the expense of human welfare. Arrieta et al. (2020) demonstrate that explainable AI methods can significantly improve the interpretability of autonomous system decisions without compromising predictive performance — a finding that should be treated as a design baseline, not an optional enhancement. The AI governance and advisory resources at AI Coach address the institutional dimension of this challenge directly, providing guidance on how organisations build oversight competence sufficient to hold autonomous systems meaningfully accountable.
A fourth challenge, less discussed in the framework literature but equally significant in practice, is regulatory uncertainty. Most jurisdictions do not yet have governance frameworks specifically designed for autonomous AI systems in socio-technical governance roles. The European Union’s AI Act represents the most advanced legislative effort in this space, classifying high-risk AI applications — a category that encompasses many DT governance contexts — under mandatory conformity assessment requirements. The absence of equivalent frameworks in other major jurisdictions creates compliance asymmetries that complicate cross-border deployment.
Future Trajectories
The near-term trajectory of autonomous DT research points in several directions simultaneously. Nechesov et al. (2025) anticipate quantum-enhanced simulation capabilities that will dramatically increase the fidelity and speed of DT modelling, enabling real-time governance of systems whose complexity currently exceeds available computational capacity. Swarm DT architectures — networks of relatively simple autonomous agents whose collective behaviour produces sophisticated governance outcomes — are emerging as a viable alternative to both centralised and conventional decentralised designs, with particular promise for resource-constrained deployment environments.
The integration of DTs with immersive virtual environments opens possibilities for stakeholder participation in governance processes that have no physical-world equivalent. Citizens, communities, and institutions could interact with DT representations of proposed interventions before implementation, generating governance legitimacy through participation rather than assumption. Bonetti et al. (2024) envision this as a pathway toward DT-enabled contribution to the UN Sustainable Development Goals, where co-creation between digital and physical systems produces equitable outcomes at scale.
Fatade et al. (2026) call for an integrated research agenda prioritising hybrid modelling approaches, human-centred and ethical design, interoperability standards, and robust governance frameworks. This agenda is not aspirational — it is a practical response to governance failures that autonomous systems have already produced in less carefully designed contexts. The most instructive failures in AI governance to date have occurred not in systems that attempted autonomous governance with too little technical capability, but in systems deployed at scale before governance frameworks were established. The DT governance literature is, at its best, an attempt to invert that sequence: to build governance capacity before capability outpaces it.
Agentic AI societies — multi-DT systems where individual agents collaborate, negotiate, and adapt to produce emergent governance outcomes — represent the outer boundary of current theoretical development (Nechesov et al., 2025). Their implementation at meaningful scale remains a research objective rather than an operational reality, but the conceptual groundwork being laid now will determine whether that implementation is responsible. For practitioners and researchers building foundational understanding of AI system deployment in complex environments, the practical AI frameworks and guides at 1 Hour Guide provide an accessible entry point to the concepts that underpin the more specialised governance literature reviewed here.
Regulatory sandboxes — controlled environments in which autonomous DT governance frameworks can be tested before deployment at scale — are gaining traction as a policy instrument. Within the European AI Act framework, such sandboxes may become mandatory for high-risk AI applications, enabling the empirical testing of governance frameworks against defined performance metrics before consequential deployment. The broader adoption of this instrument across jurisdictions would significantly accelerate the development of evidence-based governance standards.
The Bottom Line
Autonomous AI-enabled digital twins are not, in themselves, governance solutions. They are governance instruments. The distinction matters. An instrument requires skilled operators, institutional frameworks, and accountability mechanisms that exist independently of the instrument itself. The frameworks reviewed here — from Bonetti et al.’s model-driven engineering approach to Kalaboukas et al.’s three-view governance model to Fatade et al.’s autonomy spectrum — collectively demonstrate that technical architecture and governance architecture must be designed together. Neither compensates for deficits in the other.
The implication for researchers and practitioners is clear. The most technically sophisticated autonomous DT deployed without a coherent governance framework will produce outcomes that are efficient, accountable to no one, and liable to generate the kind of institutional harm that undermines trust in automated governance for a generation. The less sophisticated system, governed well, will produce outcomes that are slower, noisier, and infinitely more defensible. The literature reviewed here does not suggest that technical excellence is irrelevant. It establishes that governance excellence is the binding constraint.
The field is young. The case studies are promising. The challenges are significant and well-characterised in the literature. The next generation of research needs to generate not just better models but better institutions — the regulatory frameworks, professional standards, and governance competencies that will determine whether autonomous AI-enabled digital twins fulfil their considerable potential for socio-technical stewardship.
References
- Arrieta, A.B. et al. (2020) ‘Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI’, Information Fusion, 58, pp. 82–115. https://doi.org/10.1016/j.inffus.2019.12.012
- Baxter, G. and Sommerville, I. (2011) ‘Socio-technical systems: From design methods to systems engineering’, Interacting with Computers, 23(1), pp. 4–17. https://doi.org/10.1016/j.intcom.2010.07.003
- Bonetti, F. et al. (2024) ‘Digital twins of socio-technical ecosystems to drive societal change’, in ACM/IEEE 27th International Conference on Model Driven Engineering Languages and Systems (MODELS Companion ’24). New York: Association for Computing Machinery, pp. 275–286. https://doi.org/10.1145/3652620.3686248
- Dafoe, A. (2018) AI Governance: A Research Agenda. Oxford: Future of Humanity Institute. Available at: https://www.fhi.ox.ac.uk/govaiagenda
- Deci, E.L. and Ryan, R.M. (2000) ‘The “what” and “why” of goal pursuits: Human needs and the self-determination of behavior’, Psychological Inquiry, 11(4), pp. 227–268. https://doi.org/10.1207/S15327965PLI1104_01
- Fatade, A. et al. (2026) ‘Autonomous AI-enabled digital twins for socio-technical systems: architectures, autonomy levels, and governance – a comparative review’, Journal of Computer, Software, and Program, 3(1), pp. 1–8. https://doi.org/10.69739/jcsp.v3i1.1550
- Floridi, L. et al. (2018) ‘AI4People — An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations’, Minds and Machines, 28(4), pp. 689–707. https://doi.org/10.1007/s11023-018-9482-5
- Grieves, M. and Vickers, J. (2017) ‘Digital twin: Mitigating unpredictable, undesirable emergent behavior in complex systems’, in Kahlen, F.J., Flumerfelt, S. and Alves, A. (eds.) Transdisciplinary Perspectives on Complex Systems. Cham: Springer, pp. 85–113.
- Jobin, A., Ienca, M. and Vayena, E. (2019) ‘The global landscape of AI ethics guidelines’, Nature Machine Intelligence, 1(9), pp. 389–399. https://doi.org/10.1038/s42256-019-0088-2
- Kalaboukas, K. et al. (2023) ‘Governance framework for autonomous and cognitive digital twins in agile supply chains’, Computers in Industry, 146, p. 103857. https://doi.org/10.1016/j.compind.2023.103857
- Lee, E.A. (2008) ‘Cyber physical systems: Design challenges’, in 11th IEEE International Symposium on Object and Component-Oriented Real-Time Distributed Computing (ISORC). pp. 363–369. https://doi.org/10.1109/ISORC.2008.25
- Li, T. et al. (2020) ‘Federated learning: Challenges, methods, and future directions’, IEEE Signal Processing Magazine, 37(3), pp. 50–60. https://doi.org/10.1109/MSP.2020.2975749
- McMahan, H.B. et al. (2017) ‘Communication-efficient learning of deep networks from decentralized data’, in Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS). pp. 1273–1282.
- Miadowicz, M. et al. (2026) ‘Evolution of digital twins toward autonomous socio-technical systems’, arXiv preprint (referenced via conceptual evolution frameworks in related reviews).
- Monarch, R.M. (2021) Human-in-the-Loop Machine Learning. Shelter Island: Manning Publications.
- Nechesov, A. et al. (2025) ‘Virtual cities: from digital twins to autonomous AI societies’, IEEE Access, 13, pp. 13866–13903. https://doi.org/10.1109/ACCESS.2025.3531222
- Rajkumar, R. et al. (2010) ‘Cyber-physical systems: The next computing revolution’, in Proceedings of the 47th Design Automation Conference. New York: ACM, pp. 731–736. https://doi.org/10.1145/1837274.1837461
- Russell, S. (2019) Human Compatible: Artificial Intelligence and the Problem of Control. New York: Viking.
- Sutton, R.S. and Barto, A.G. (2018) Reinforcement Learning: An Introduction. 2nd edn. Cambridge: MIT Press.
- Tao, F. et al. (2018) ‘Digital twin-driven product design, manufacturing and service with big data’, International Journal of Advanced Manufacturing Technology, 94(9–12), pp. 3563–3576. https://doi.org/10.1007/s00170-017-0233-1
- Tao, F. et al. (2019) ‘Digital twin in industry: State-of-the-art’, IEEE Transactions on Industrial Informatics, 15(4), pp. 2405–2415. https://doi.org/10.1109/TII.2018.2873186
- Trist, E.L. and Bamforth, K.W. (1951) ‘Some social and psychological consequences of the Longwall method of coal-getting’, Human Relations, 4(1), pp. 3–38. https://doi.org/10.1177/001872675100400101
- Wooldridge, M. and Jennings, N.R. (1995) ‘Intelligent agents: Theory and practice’, The Knowledge Engineering Review, 10(2), pp. 115–152. https://doi.org/10.1017/S0269888900008122
Further Reading
- Twinlabs — AI Decision Intelligence and Digital Twin Research
- AI Coach — Governance and Advisory Frameworks for AI Systems
- 1 Hour Guide — Practical Frameworks for AI Implementation