Jun 26, 2025

Can Machines Collectively Think?

Abstract

The question of whether machines can reason like humans, first posed by Alan Turing, gains urgency as autonomous and connected computing systems become integral to society. The IEEE International Roadmap for Devices and Systems predicts autonomous systems will transform sectors like mobility, healthcare, and energy, driven by advancements in connectivity such as 6G. These systems must make independent decisions in unpredictable environments, necessitating advanced reasoning beyond current machine learning approaches. Collaborative autonomy, where systems share data to enhance decision-making, promises safer and more reliable outcomes, particularly in safety-critical applications. However, challenges in aggregating conflicting information from diverse systems require new models for data exchange and reasoning to ensure trustworthiness and realize the potential of these technologies.

Summary

The idea that machines could think like humans, introduced by Alan Turing through his exploration of machine intelligence and the imitation game, remains a cornerstone of artificial intelligence. Today, as computing systems grow increasingly autonomous and interconnected, the question extends beyond individual machine reasoning to whether machines can think collectively. This is critical for the sustainable development of future technologies that will reshape society.

The IEEE International Roadmap for Devices and Systems identifies autonomous systems as a major driver of technological and societal progress over the next decade. These systems are expected to revolutionize sectors such as transportation, healthcare, manufacturing, home automation, and energy management, with an impact surpassing previous technological revolutions. The rise of connectivity, particularly with emerging technologies like 6G wireless communication, supports the development of distributed intelligent systems. For example, in autonomous driving, vehicles collaborate with intelligent road infrastructure, sharing real-time data like speed and location to enhance safety and efficiency. This cooperative approach is both safer and more cost-effective than relying solely on onboard vehicle systems.
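The kind of cooperative data sharing described above can be sketched in a few lines. The message fields and the hazard check below are purely illustrative assumptions, not taken from any actual V2X standard: the point is only that a vehicle can act on a report from beyond its own sensor range.

```python
from dataclasses import dataclass


@dataclass
class StatusMessage:
    """Illustrative vehicle-to-infrastructure message; the field names
    are hypothetical, not drawn from any V2X message standard."""
    vehicle_id: str
    speed_mps: float
    position_m: float  # distance along the road segment


def stopped_vehicle_ahead(messages, my_position_m, horizon_m=200.0):
    """True if any reporting vehicle within the look-ahead horizon is
    (nearly) stationary -- a hazard onboard sensors may not yet see."""
    return any(
        0.0 < m.position_m - my_position_m <= horizon_m
        and m.speed_mps < 1.0
        for m in messages
    )
```

For instance, a vehicle at position 0 m would be warned about a stalled vehicle 180 m ahead, well before its own cameras or lidar could confirm it.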

Autonomous systems are defined as self-governing and adaptive, capable of performing complex tasks in dynamic, unpredictable environments without human intervention. Unlike traditional automated systems that handle repetitive tasks, autonomous systems must navigate uncertainties, such as planning a vehicle’s trajectory in traffic or ensuring a robot safely guides a visually impaired person across a busy intersection. Reliable and efficient decision-making is essential, particularly in safety-critical applications where trustworthiness is paramount. Reasoning, the process of thinking to reach a decision, is a vital component of intelligence alongside learning. While machine learning has dominated research efforts, advanced reasoning has received less attention. Designing autonomous systems requires more than integrating learning components; it involves ensuring systems can adapt and behave reliably as they evolve. Current limitations, such as the inability to guarantee fully safe outcomes in perception systems for autonomous driving, highlight the need for collaborative approaches where systems share information to improve decision-making.

Collaborative autonomy draws inspiration from human group reasoning, where individuals combine knowledge to make better decisions. In social sciences, this is known as collective intelligence, where distributed knowledge is coordinated to leverage diverse skills. However, aggregating conflicting information from multiple sources poses challenges for trustworthy outcomes. Social choice theory, exemplified by Arrow's impossibility theorem, demonstrates that no aggregation method can simultaneously ensure rational outcomes and fair representation, a result classically illustrated by majority voting. Similarly, in computing, Leslie Lamport's Byzantine Generals Problem illustrates the difficulty of achieving consensus in distributed systems when some components provide faulty or conflicting information. Lamport and his coauthors showed that agreement is achievable only when fewer than one-third of the components are faulty; with more faults, or when the faulty sources cannot be bounded, the problem becomes unsolvable. These theories often assume homogeneous systems and majority-based aggregation, which do not fully account for the heterogeneity of modern collaborative systems, where differences in data quality and system capabilities impact outcomes.
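The one-third bound can be seen concretely in a small simulation of Lamport's "oral messages" algorithm OM(m). The sketch below is a simplified model under stated assumptions: loyal participants relay values faithfully, a traitor sends an arbitrary alternating pattern of orders, and ties in the majority vote fall back to the first value seen (standing in for Lamport's fixed default). With four generals and one traitor, the loyal lieutenants agree; with only three generals, they can be driven apart.

```python
from collections import Counter


def om(m, commander, lieutenants, value, traitors):
    """Lamport's OM(m) oral-messages algorithm, in a simplified model.
    Returns a dict mapping each lieutenant to its decided value."""
    # Round 1: what each lieutenant hears from the commander.
    received = {}
    for lt in lieutenants:
        if commander in traitors:
            # A traitor may send different orders to different lieutenants;
            # this alternating pattern is one adversarial choice.
            received[lt] = "attack" if lt % 2 == 0 else "retreat"
        else:
            received[lt] = value
    if m == 0:
        return received
    # Recursive rounds: each lieutenant relays what it heard to the others,
    # and every lieutenant takes the majority of all values it collects.
    decisions = {}
    for lt in lieutenants:
        values = [received[lt]]
        for other in lieutenants:
            if other == lt:
                continue
            relayed = om(m - 1, other,
                         [x for x in lieutenants if x != other],
                         received[other], traitors)
            values.append(relayed[lt])
        decisions[lt] = Counter(values).most_common(1)[0][0]
    return decisions
```

With commander 0 loyal, lieutenants 1-3, and lieutenant 2 a traitor, `om(1, 0, [1, 2, 3], "attack", {2})` has both loyal lieutenants decide "attack"; even a traitorous commander cannot split three loyal lieutenants. Shrink the group to three generals with one traitor and the loyal pair can end up with opposite decisions, matching the impossibility result.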

Collaborative autonomous systems can be viewed as distributed systems, with autonomous agents communicating and coordinating through data exchange. Traditional distributed computing models, such as communicating finite state machines or synchronous data flow, focus on managing concurrency and predictable data exchange. These models remain relevant for autonomous systems, particularly in cyber-physical applications. However, unlike classical computing, where data represents functional outputs, collaborative autonomous systems exchange “beliefs” (e.g., detecting a pedestrian) supported by “evidence” (e.g., explainable perception outputs). The relevance of data varies by context, making it essential to define which data should be shared and how it contributes to reasoning. Current aggregation methods, often based on majority rules, do not fully address the semantic meaning of data or system heterogeneity. New aggregation rules that account for these factors, along with conditions ensuring coherent outcomes, are needed to enable effective collaborative autonomy.
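One direction such aggregation rules could take is weighting each reported belief by the strength of its supporting evidence rather than counting heads. The sketch below is a minimal illustration, not a method from the text: the numeric weights are an assumed stand-in for evidence strength or source quality, and the scenario (three low-confidence camera agents outvoted by one high-confidence lidar agent) is hypothetical.

```python
from collections import Counter, defaultdict


def majority_vote(reports):
    """Plain majority over beliefs, ignoring evidence weights."""
    return Counter(belief for belief, _ in reports).most_common(1)[0][0]


def weighted_vote(reports):
    """Sum per-belief weights and pick the heaviest belief. The weight
    is assumed to encode evidence strength or source quality -- a
    modelling choice for illustration, not a standard rule."""
    scores = defaultdict(float)
    for belief, weight in reports:
        scores[belief] += weight
    return max(scores, key=scores.get)


# Three low-confidence camera agents vs. one high-confidence lidar agent.
reports = [("clear", 0.2), ("clear", 0.2), ("clear", 0.2),
           ("pedestrian", 0.9)]
```

Here a majority rule decides "clear" (three reports to one), while the weighted rule decides "pedestrian" (0.9 vs. 0.6), illustrating why heterogeneity in data quality matters for safety-critical outcomes.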

In conclusion, the rise of autonomous and connected computing systems underscores the need for advanced reasoning and collaborative intelligence. These systems promise transformative impacts on society, particularly in safety-critical applications, but their success depends on reliable decision-making in dynamic environments. Collaborative autonomy offers a path to safer outcomes by enabling systems to share information and reason together. Addressing challenges in aggregating diverse data and ensuring trustworthiness requires innovative approaches to data exchange and decision-making, paving the way for future systems that are both transformational and reliable.
