Dr. Ben Goertzel—Striving towards an Autonomous, Decentralized, and Compassionate Artificial Superintelligence

Dr. Ben Goertzel is the founder and CEO of SingularityNET, a decentralized blockchain-based Artificial Intelligence (AI) marketplace project. He has described the project as a medium for the creation and the emergence of Artificial General Intelligence (AGI) as well as a way to roll out superior AI-as-a-service to every vertical market and enable everyone in the world to contribute to and benefit from AI.

Blockchain.News managed to catch up with Dr. Goertzel at Blockshow 2019 in Singapore. In the first part of our interview, we discuss the evolution and the philosophical aspects of AI and AGI.

Evolution of Artificial Intelligence

For many, the concept of machines with the ability to learn and develop as humans do, but with the calculation speed of a computer, is simply terrifying. Will machines replace us? Will we be able to contain them, or will we merge with them? And how far away is this future?

While AI broadly describes the simulation of human intelligence by machines, most of the AI we encounter in our day-to-day lives consists of complex mathematical algorithms, such as Apple’s Siri or Amazon’s Alexa. These systems are described as ‘narrow AI’ and are of relatively weak intelligence—capable of performing basic tasks, but only within a very specific framework.

Goertzel is aiming higher, essentially trying to birth a much stronger type of AI, known as artificial general intelligence (AGI)—AI systems with human-like learning and cognitive abilities. He explained, “AGI refers to an AI that can generalize way beyond what it has been taught and has seen, which means it can imaginatively guess elements about new domains of experience. This is extremely important in the modern world where we are forced to deal with unexpected circumstances all the time.”

Beyond AGI is where things get really exciting and may present an existential challenge for humanity—artificial superintelligence (ASI). Goertzel said, “Artificial superintelligence is the next step beyond general intelligence. Humans currently have more general intelligence than the software products that are commercially available right now. But humans are by no means the most generally intelligent possible system. I think as AI advances further and further, you’re going to see AI as tremendously smarter than people, much as we’re much smarter than monkeys, rats, or bugs. But to get from where we are now with narrow AI, to AGI, and then to artificial superintelligence, we need to go through quite a series of practical steps.” He continued, “That’s really what we’re engaged with at SingularityNET—the project is how to get through the next steps of the evolution—from where we are now, with fairly simplistic narrow AI, towards a powerful general intelligence, but also taking care to do it in a way that avoids putting the AI in control of these confused, centralized, globally powerful parties—we want a decentralized general intelligence.”

The Singularity in SingularityNET

As Dr. Goertzel revealed, “The singularity in SingularityNET refers to the future foreseen by Vernor Vinge and popularized by Ray Kurzweil—it is basically the moment at which technology starts advancing so fast that it seems effectively instantaneous to the human mind, and this is going to occur by AGI becoming smarter than people. AI will be doing the invention rather than people.”

Shortly before his death, at a conference in Lisbon, Stephen Hawking warned those in attendance that the development of artificial intelligence might become the “worst event in the history of our civilization.” He was alluding to what is known as the ‘technological singularity.’ Other notable intellectuals of our time, including Tesla’s Elon Musk and neuroscientist Sam Harris, have also delivered foreboding speeches about the technology, believing it may mark the start of our impending doom: that it will ultimately replace us completely or, even worse, simply discard us in the course of an intermediary task, as HAL 9000 discarded the lives of the astronauts in favor of completing the mission of the Discovery One.

Goertzel does not share this apocalyptic view but sees an opportunity for humans and machines to evolve together. He said, “AI will almost certainly become far more intelligent than human beings. But there will be a possibility for humans to follow the AI along and effectively fuse their minds with the AI—which Elon Musk, among others, is also working on with his company Neuralink. I would say humans who choose not to fuse with the AI will indeed be, in a sense, left behind, as they will no longer be among the smartest beings in this region of the universe.”

Artificial Compassion for Humanity

The fact that AI will become much smarter than people does not necessarily mean that AI is a danger to humans. Goertzel explained, “That all depends on how they are built; what we want is AIs that are compassionately disposed toward human beings. That is also why at SingularityNET we’re so focused on creating a democratically controlled AI mind, because if the first true general intelligence is controlled by a military organization or an advertising agency, then this probably isn’t optimal in terms of getting a beneficial general intelligence to emerge into a compassionate supermind.”

So how do you put compassion into a machine? How can you teach an AI about empathy and concepts as abstract as love? The reality is that even as humans, we are unable to display or enact these concepts with any real consistency. Goertzel said, “You don’t program empathy into the code of the AI; these things will be learned by the AI. Compassion will emerge within the AI in the course of its interactions with the world—including the humans in the world and the physical world. It’s very similar to a child, you don’t program emotions or compassion into a child; they gain it through interactions with the world.” He added, “So the task of AI is to build a learning system and a self-organizing system that can organize its own mind; its own feelings; and its own compassion in an appropriate way. It is complex, but the internet is complex, your mobile phone is complex, your laptop is complex, I mean, humanity has built many complex things, and these are built by a combination of many complex people working together.”

At the comparison of an AI developing as a child would, I could not help but consider the number of children who grow up to be sociopathic—often making impulsive decisions or breaking rules with little or no feeling of guilt or wrongdoing. Goertzel admitted, “Humanity is certainly a complex mess with aspects that are both positive and negative according to the value systems of various parts of humanity. I think the best we can do is put an AI out there in the world and expose it to the various aspects of humanity and make sure that it’s biased in a positive direction.”

Goertzel himself is a father of four children and a grandfather to one. Speaking from experience, he said, “Protecting them from all the bad things in the world is something you can only do to a limited extent because eventually, they’re going to go out there and interact with some harsh realities—but you can bias what you expose them to in a positive direction.”

The reality of our future, according to Goertzel, is that AGI is coming regardless; what we can do is ensure that it is not disposed solely towards the whims of a powerful central authority and that it is taught compassion for humanity. He said, “AI will be used for military purposes, it will be used for advertising and even crime. We have to make sure that AI is also used, and to a greater extent, for education, agriculture, healthcare, and scientific discovery. The AI will get all these things integrated into its mind and be able to form a whole picture of human values and culture to form a substantial inclination towards compassion.”

Sum of Many

Goertzel clarified that the creation of the future AGI global intelligence will not be done solely by his team at SingularityNET; it will be the combined work of a vast community of AI and technology developers, as well as the information that the AI agents on the network are able to absorb from the human consumers who leverage the network.

He said, “If Singularity is going to play a key role, then we need to be massively growing the user base of SingularityNET—we have to drive massive adoption of these decentralized networks that we have launched. After two years of work, we have a pretty nice version of the SingularityNET platform out there. It’s a decentralized network which is democratically governed and controlled—meaning the AI network is sort of controlled by the AI agents in the network, rather than by some outside party. It’s a nice bit of software and we’ve shown it works.” Concluding, he said, “If humanity wants to transition from AI to AGI and then to superintelligence in a democratic and participatory way, then networks like SingularityNET need to be a significant part of the mix, which is easy to see from an abstract view, but from practice on the ground, there’s still a lot to be done to get adoption of this sort of platform.”

Understanding Generative AI and Future Directions with Google Gemini and OpenAI Q-Star

As the world of artificial intelligence (AI) continues to evolve at a breakneck pace, recent developments such as Google’s Gemini and OpenAI’s speculative Q-Star project are reshaping the generative AI research landscape. A recent research paper, titled “From Google Gemini to OpenAI Q* (Q-Star): A Survey of Reshaping the Generative Artificial Intelligence (AI) Research Landscape” and authored by Timothy R. McIntosh, Teo Susnjak, Tong Liu, Paul Watters, and Malka N. Halgamuge, provides an insightful overview of the rapidly evolving domain of generative AI. This analysis delves into the transformative impact of these technologies, highlighting their implications and potential future directions.

Historical Context and Evolution of AI

The journey of AI, tracing back to Alan Turing’s early computational theories, has set a strong foundation for today’s sophisticated models. The rise of deep learning and reinforcement learning has catalyzed this evolution, leading to the creation of advanced constructs like the Mixture of Experts (MoE).

The Emergence of Gemini and Q-Star

The unveiling of Gemini and the discourse surrounding the Q-Star project mark a pivotal moment in generative AI research. Gemini, a pioneering multimodal conversational system, represents a significant leap over traditional text-based LLMs like GPT-3 and even multimodal successors like GPT-4. Its multimodal encoder and cross-modal attention network facilitate the processing of diverse data types, including text, images, audio, and video.
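To make the idea of cross-modal attention concrete, here is a minimal, illustrative sketch in Python (NumPy): text-token queries attend over image-patch keys and values, which is the general mechanism of cross-modal attention. The shapes, random weights, and function names are assumptions for illustration, not Gemini’s actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(text_tokens, image_patches, d_k=64, seed=0):
    """Toy single-head cross-attention: text queries attend over image keys/values.

    text_tokens:   (T, d_model) embeddings from a text encoder (illustrative)
    image_patches: (P, d_model) embeddings from a vision encoder (illustrative)
    """
    rng = np.random.default_rng(seed)
    d_model = text_tokens.shape[1]
    # Randomly initialised projection matrices stand in for learned weights.
    W_q = rng.normal(size=(d_model, d_k))
    W_k = rng.normal(size=(d_model, d_k))
    W_v = rng.normal(size=(d_model, d_k))

    Q = text_tokens @ W_q        # queries come from the text modality
    K = image_patches @ W_k      # keys/values come from the image modality
    V = image_patches @ W_v

    scores = Q @ K.T / np.sqrt(d_k)   # (T, P) text-to-image relevance
    weights = softmax(scores, axis=-1)
    return weights @ V                # text tokens enriched with visual context

# Example: 5 text tokens attending over 9 image patches, 32-dim embeddings.
fused = cross_modal_attention(np.random.randn(5, 32), np.random.randn(9, 32))
print(fused.shape)  # (5, 64)
```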

In contrast, Q-Star is speculated to blend LLMs, Q-learning, and the A* search algorithm, potentially enabling AI systems to move beyond the confines of board games. This combination could lead to more nuanced interactions and a leap towards AI that is adept in both structured tasks and complex, human-like communication and reasoning.
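Nothing about Q-Star’s internals is public, so any code can only be a guess. Purely as an illustration of the two named ingredients, the toy below runs tabular Q-learning on a one-dimensional corridor and nudges action selection with an A*-style distance-to-goal heuristic; every name and parameter here is hypothetical.

```python
import random

N = 10          # states 0..10, goal at the right end
GOAL = N

def step(state, action):
    nxt = max(0, min(N, state + action))
    reward = 1.0 if nxt == GOAL else -0.01
    return nxt, reward, nxt == GOAL

def heuristic(state):
    # A*-style admissible heuristic: remaining distance to the goal.
    return GOAL - state

def q_learning_with_heuristic(episodes=500, alpha=0.1, gamma=0.95, eps=0.1):
    Q = {(s, a): 0.0 for s in range(N + 1) for a in (-1, 1)}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if random.random() < eps:
                a = random.choice((-1, 1))
            else:
                # Greedy selection on Q minus a small heuristic cost-to-go,
                # loosely mimicking A*'s f = g + h guidance.
                a = max((-1, 1),
                        key=lambda act: Q[(s, act)] - 0.01 * heuristic(max(0, min(N, s + act))))
            s2, r, done = step(s, a)
            best_next = max(Q[(s2, -1)], Q[(s2, 1)])
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
    return Q

Q = q_learning_with_heuristic()
print(Q[(0, 1)] > Q[(0, -1)])  # moving toward the goal should score higher
```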

Mixture of Experts: A Paradigm Shift

The adoption of the MoE architecture in LLMs marks a critical evolution in AI: because only a subset of experts is activated for each input, models can scale to vast parameter counts while keeping the memory footprint and computational cost of each forward pass manageable. However, the approach also faces challenges around dynamic routing complexity, expert load imbalance, and ethical alignment.
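As a rough illustration of the mechanism (not any specific production system), the sketch below routes an input through only the top-k of several expert weight matrices, which is how sparse MoE layers hold many parameters while activating only a fraction of them per token. All sizes and names are made up.

```python
import numpy as np

def top_k_moe_layer(x, expert_weights, gate_weights, k=2):
    """Toy Mixture-of-Experts forward pass with top-k gating.

    x:              (d_in,) input vector
    expert_weights: list of (d_in, d_out) matrices, one per expert
    gate_weights:   (d_in, num_experts) router matrix
    Only the top-k experts are evaluated, so per-token compute stays modest
    even though the layer's total parameter count is large.
    """
    logits = x @ gate_weights                 # router scores per expert
    top = np.argsort(logits)[-k:]             # indices of the k best experts
    gates = np.exp(logits[top] - logits[top].max())
    gates /= gates.sum()                      # normalised mixing weights
    outputs = [x @ expert_weights[i] for i in top]
    return sum(g * o for g, o in zip(gates, outputs))

rng = np.random.default_rng(0)
d_in, d_out, num_experts = 16, 8, 4
experts = [rng.normal(size=(d_in, d_out)) for _ in range(num_experts)]
router = rng.normal(size=(d_in, num_experts))
y = top_k_moe_layer(rng.normal(size=d_in), experts, router, k=2)
print(y.shape)  # (8,)
```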

Multimodal AI and Future Interaction

The advent of multimodal AI, especially through systems like Gemini, is revolutionizing how machines interpret and interact with human sensory inputs and contextual data, marking a significant shift in how people and machines will interact.

Speculative Advances and Chronological Trends

The speculative capabilities attributed to the Q-Star project would embody a significant leap forward, blending pathfinding algorithms with LLMs. This could lead to AI systems that are not only more efficient problem-solvers but also more creative and insightful in their approach.

Conclusion

The advancements in AI, as exemplified by Gemini and Q-Star, represent a crucial turning point in generative AI research. They highlight the importance of integrating ethical and human-centric methods in AI development to align with societal norms and welfare. As we venture further into this exciting era of AI, the potential applications and impacts of these technologies on various domains remain a subject of keen interest and anticipation.

Advancements in Autonomous Driving: Analyzing the Progress of Waymo, Tesla, and Industry Trends

The autonomous vehicle industry, marked by significant technological advancements and challenges, is shaping the future of transportation. A detailed analysis published on karpathy.github.io by Andrej Karpathy, a noted AI expert, OpenAI co-founder, and former Tesla AI lead, offers a realistic view of this evolving landscape.

Partial Automation: The Present Scenario

The journey towards autonomous driving started with partial automation, where AI assists in specific tasks like parking and lane changes. This Level 2 automation, similar to AI tools in other sectors, is not fully autonomous but aids in managing routine driving operations. These systems often outperform humans in tasks like lane following but require human supervision for safety and reliability.

Full Automation: Waymo’s Advancements

Waymo, a key player in the autonomous driving field, has achieved full automation in certain urban areas such as San Francisco. These fully autonomous vehicles offer services similar to rideshares but without the need for a human driver. Despite this technological achievement, adoption of such services is tempered by levels of public trust and awareness.

Economic and Social Considerations

The introduction of autonomous vehicles impacts job markets, creating new roles while transforming existing ones. For instance, Waymo’s technology replaces traditional driver roles but creates opportunities in areas like data annotation, remote assistance, and fleet maintenance. This shift in the job landscape reflects a more complex economic impact than mere job displacement.

Industry Dynamics

The autonomous driving sector is witnessing consolidation, with Waymo, Tesla, and other companies such as Cruise and Zoox emerging as key competitors. These companies follow different strategies for achieving autonomous driving at scale, with Tesla focusing on a software-centric approach in contrast to Waymo’s hardware-intensive strategy.

Future Prospects and Challenges

According to insights from Krazytech, advancements in autonomous driving continue, with significant investments in technologies like LiDAR and AI. The industry, however, faces challenges such as high costs and regulatory hurdles that are delaying the widespread adoption of fully autonomous vehicles.

Conclusion

The autonomous vehicle industry is evolving, driven by technological advancements and market dynamics. Companies like Waymo and Tesla are at the forefront, each adopting a different strategy for scaling autonomy globally. The industry still faces challenges in public acceptance and regulatory compliance, indicating a gradual transition towards widespread adoption of autonomous vehicles.

Exploring AGI Hallucination: A Comprehensive Survey of Challenges and Mitigation Strategies

A recent comprehensive survey titled “A Survey of AGI Hallucination” by Feng Wang from Soochow University sheds light on the challenges and current research surrounding hallucinations in Artificial General Intelligence (AGI) models. As AGI continues to advance, addressing the issue of hallucinations has become a critical focus for researchers in the field.

The survey categorizes AGI hallucinations into three main types: conflict in intrinsic knowledge of models, factual conflict in information forgetting and updating, and conflict in multimodal fusion. These hallucinations manifest in various ways across different modalities, such as language, vision, video, audio, and 3D or agent-based systems.

The authors explore the emergence of AGI hallucinations, attributing them to factors like training data distribution, timeliness of information, and ambiguity in different modalities. They emphasize the importance of high-quality data and appropriate training techniques in mitigating hallucinations.

Current mitigation strategies are discussed in three stages: data preparation, model training, and model inference and post-processing. Techniques like RLHF (Reinforcement Learning from Human Feedback) and knowledge-based approaches are highlighted as effective methods for reducing hallucinations.
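As a small illustration of the RLHF piece at the training stage, the snippet below computes the standard pairwise (Bradley-Terry) loss used to train a reward model on human preference pairs; minimising it pushes the reward model to score faithful answers above hallucinated ones. The scores in the example are invented, and this is only one component of a full RLHF pipeline.

```python
import numpy as np

def reward_model_pairwise_loss(r_chosen, r_rejected):
    """Bradley-Terry style loss for training an RLHF reward model.

    r_chosen / r_rejected are scalar reward-model scores for the human-preferred
    and the dispreferred response to the same prompt. Minimising this loss pushes
    the model to rank faithful (non-hallucinated) answers above unfaithful ones.
    """
    return -np.log(1.0 / (1.0 + np.exp(-(r_chosen - r_rejected))))  # -log sigmoid(diff)

# Example: preferred answer scored 2.1, hallucinated answer scored -0.4.
print(round(float(reward_model_pairwise_loss(2.1, -0.4)), 4))
```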

Evaluating AGI hallucinations is crucial for understanding and addressing the issue. The survey covers various evaluation methodologies, including rule-based, large model-based, and human-based approaches. Benchmarks specific to different modalities are also discussed.

Interestingly, the survey notes that not all hallucinations are detrimental. In some cases, they can stimulate a model’s creativity. Finding the right balance between hallucination and creative output remains a significant challenge.

Looking to the future, the authors emphasize the need for robust datasets in areas like audio, 3D modeling, and agent-based systems. They also highlight the importance of investigating methods to enhance knowledge updating in models while retaining foundational information.

As AGI continues to evolve, understanding and mitigating hallucinations will be essential for developing reliable and safe AI systems. This comprehensive survey provides valuable insights and paves the way for future research in this critical area.

HuggingGPT: Bridging AI Models for Advanced General Intelligence

The quest for artificial general intelligence (AGI) has taken a significant stride forward with the introduction of HuggingGPT, a system designed to leverage large language models (LLMs) such as ChatGPT to manage and utilize various AI models from machine learning communities like Hugging Face. This innovative approach paves the way for more sophisticated AI tasks across different domains and modalities, marking a notable advancement towards the realization of AGI.

Developed through a collaboration between Zhejiang University and Microsoft Research Asia, HuggingGPT acts as a controller, enabling LLMs to perform complex task planning, model selection, and execution by using language as a universal interface. This allows for the integration of multimodal capabilities and the tackling of intricate AI tasks that were previously beyond reach.

HuggingGPT’s methodology represents a significant leap in AI capabilities. By parsing user requests into structured tasks, it can autonomously select the most suitable AI models for each subtask and execute them to generate comprehensive responses. The process is impressive not only in its autonomy but also in its potential to keep absorbing expertise from various specialized models, continuously enhancing its capabilities.
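The sketch below is not HuggingGPT’s real code or API; it only schematizes the four-stage workflow the paper describes (task planning, model selection, task execution, and response generation), with hypothetical helper functions and a stubbed-out LLM standing in for the controller.

```python
# Schematic of the four-stage controller loop (task planning -> model selection
# -> task execution -> response generation). All helpers are hypothetical stubs.
from typing import Callable

def plan_tasks(llm: Callable[[str], str], user_request: str) -> list[dict]:
    """Stage 1: the LLM parses the request into structured subtasks."""
    # A real system would prompt the LLM for structured output; we fake one subtask.
    return [{"task": "image-classification", "args": {"image": "cat.png"}}]

def select_model(llm: Callable[[str], str], task: dict, registry: dict) -> str:
    """Stage 2: pick a suitable expert model from a model registry."""
    return registry[task["task"]]

def execute(model_name: str, task: dict) -> str:
    """Stage 3: run the selected model; stubbed out for illustration."""
    return f"{model_name} says: tabby cat"

def generate_response(llm: Callable[[str], str], results: list) -> str:
    """Stage 4: the LLM summarises all intermediate results for the user."""
    return "Summary: " + "; ".join(results)

def controller(llm, user_request, registry):
    tasks = plan_tasks(llm, user_request)
    results = [execute(select_model(llm, t, registry), t) for t in tasks]
    return generate_response(llm, results)

fake_llm = lambda prompt: ""  # stands in for an LLM acting as the controller
registry = {"image-classification": "vit-base"}
print(controller(fake_llm, "What animal is in cat.png?", registry))
```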

The system has undergone extensive experiments, demonstrating remarkable potential in handling challenging AI tasks in language, vision, speech, and cross-modality domains. Its design allows for the automatic generation of plans based on user requests and the utilization of external models, enabling the integration of multimodal perceptual abilities and the handling of complex AI tasks.

However, despite its groundbreaking nature, HuggingGPT is not without limitations. The system’s reliance on the planning capabilities of LLMs means that its effectiveness is directly tied to the LLM’s ability to parse and plan tasks accurately. Additionally, the efficiency of HuggingGPT is a concern, as multiple interactions with LLMs throughout the workflow can result in increased response times. The limited token length of LLMs also poses a challenge in connecting a large number of models.

This work is supported by various institutions, and the authors acknowledge the support of the Hugging Face team. The collaboration and contributions from individuals across the globe underscore the importance of collective efforts in advancing AI research.

As the field of artificial intelligence continues to evolve, HuggingGPT stands as a testament to the power of collaborative innovation and the potential of AI to transform various aspects of our lives. This system not only moves us closer to AGI but also opens up new avenues for research and application in AI, making it an exciting development to watch.

AGI Development: The Heart of Future AI, Zhu Songchun's Vision

In an era where artificial intelligence (AI) permeates every aspect of our society, the quest for Artificial General Intelligence (AGI) has become a global race, with China positioning itself as a frontrunner. AGI, a type of AI designed to understand, learn, and apply knowledge across a wide range of tasks, stands as the next leap in the evolution of intelligent systems.

During the second session of the 14th Chinese People’s Political Consultative Conference (CPPCC), Zhu Songchun, CPPCC member and director of the Beijing Institute for General Artificial Intelligence, emphasized that the key to mastering AGI lies not just in algorithms and computing power, but in cultivating a ‘heart’ for machines. This metaphorical ‘heart’ represents the development of AI that can interact in a more human-like, empathetic manner, transforming how machines serve society.

The unveiling of “Tongtong,” described as the world’s first AGI agent personified as a little girl, at the end of January in Beijing was a testament to the strides being made. Zhu envisions that AGI agents like Tongtong will eventually become integral to our daily lives, addressing challenges such as elderly care by providing services that go beyond mere functionality to offer compassionate companionship.

Zhu’s focus on talent as a crucial factor in winning the global tech competition is reflected in his efforts to nurture a new generation of AI specialists. Over the past three years, he has initiated AGI experimental classes at Peking University and Tsinghua University, gathering the nation’s brightest young minds. Supported by the Ministry of Education, the “Tong Plan” — a joint doctoral training program in AGI — has expanded to include eight universities, fostering a strategic national force in the field.

As China continues to invest heavily in AI research and development, Zhu’s confidence in a unique technological path suited to the country’s conditions is unwavering. He believes in the safe and beneficial growth of AGI, with the potential to make significant contributions to humanity.

The international community watches closely as China advances its AGI initiatives. With ethical considerations and governance of AI being hotly debated, the development of AGI systems like Tongtong raises important questions about the future relationship between humans and machines.

The integration of AI into various sectors, including finance, healthcare, and transportation, is already underway, with blockchain technology often playing a supportive role in securing AI operations. As AGI progresses, its convergence with blockchain could potentially lead to more robust, transparent, and secure AI applications.

This evolving landscape highlights the need for a multidisciplinary approach to AI development, where technology, ethics, and policy intersect. With figures like Zhu Songchun steering the conversation, the world may be on the cusp of an AI revolution that is as much about the ‘heart’ as it is about the ‘mind’ of the technology we create.

As we continue to observe and report on these developments, it’s clear that AGI represents not just a technological advancement but a paradigm shift in our interaction with machines. The journey toward creating AI with a ‘heart’ is sure to be complex and challenging, yet it’s a journey that could redefine the essence of innovation and cooperation in the digital age.

Scenarios for the Transition to Artificial General Intelligence (AGI)

The transition to Artificial General Intelligence (AGI) has been a topic of great interest and speculation in recent years. Many researchers and industry leaders believe that AGI, which refers to AI systems that can perform all tasks at human levels, may soon become a reality. In a working paper titled “Scenarios for the Transition to AGI,” economists Anton Korinek and Donghyun Suh delve into the economic implications of AGI development.

The paper starts by examining the relationship between technological progress, output, and wages. The authors propose a framework that decomposes human work into atomistic tasks with varying levels of complexity. They argue that advances in technology enable the automation of increasingly complex tasks, potentially leading to the automation of all tasks with the advent of AGI.

One crucial aspect analyzed in the paper is the race between automation and capital accumulation. If automation progresses slowly enough, there will always be enough work for humans, and wages may continue to rise. However, if the complexity of tasks that humans can perform is bounded and full automation is achieved, wages may collapse. The authors also consider the possibility of declines in wages before full automation occurs if large-scale automation outpaces capital accumulation, leading to an oversupply of labor.

The research suggests that automation-driven productivity growth can result in broad-based gains in the returns to all factors of production. On the other hand, bottlenecks to growth caused by scarce, irreproducible factors may exacerbate the decline in wages. The authors emphasize the importance of understanding the distribution of tasks in complexity space and its impact on economic outcomes.

While the paper provides valuable insights into the potential consequences of AGI development, it also acknowledges the uncertainties surrounding the transition. In particular, the authors consider both unbounded and bounded distributions of task complexity, with the latter reflecting the finite computational capabilities of the human brain.
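The paper’s formal model is more involved, but a toy simulation under invented functional forms can illustrate the bounded-versus-unbounded distinction: as an “automation frontier” rises, the share of tasks still requiring humans collapses to zero under a bounded complexity distribution, while a heavy-tailed, unbounded one always leaves some tasks to people. Everything below is an illustrative assumption, not the authors’ model.

```python
import numpy as np

# Toy illustration: compare how the share of tasks still requiring human work
# shrinks as an "automation frontier" rises, for a bounded versus an unbounded
# task-complexity distribution.
rng = np.random.default_rng(0)

bounded = rng.uniform(0, 10, size=100_000)          # complexity capped at 10
unbounded = rng.pareto(a=1.5, size=100_000) + 1.0   # heavy right tail, no cap

for frontier in (2.0, 5.0, 10.0, 20.0):
    share_b = float((bounded > frontier).mean())    # tasks machines cannot yet do
    share_u = float((unbounded > frontier).mean())
    print(f"frontier={frontier:5.1f}  bounded human share={share_b:.3f}  "
          f"unbounded human share={share_u:.3f}")
# With the bounded distribution the human share hits zero once the frontier
# passes the cap (full automation); with the unbounded one a sliver of
# ever-more-complex tasks always remains.
```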

Overall, the research by Korinek and Suh contributes to the ongoing discussion about the future of work in the age of AI and automation. By analyzing different scenarios for the transition to AGI, the paper sheds light on the possible effects on output, wages, and human welfare. It serves as a valuable resource for policymakers, researchers, and industry leaders seeking to understand the economic implications of AGI development.
