OpenAI’s Robotic Hand Can Now Solve a Rubik’s Cube One-Handed

This article is contributed by our content partner, Nexchange NOW.

OpenAI, the San Francisco-based artificial intelligence research organization, has hit a new milestone this week.

As the lab explained in its announcement: “Since May 2017, we’ve been trying to train a human-like robotic hand to solve the Rubik’s Cube. We set this goal because we believe that successfully training such a robotic hand to do complex manipulation tasks lays the foundation for general-purpose robots. We solved the Rubik’s Cube in simulation in July 2017. But as of July 2018, we could only manipulate a block on the robot. Now, we’ve reached our initial goal.”

That’s right. OpenAI’s robotics division has announced that Dactyl, its AI-fueled, surprisingly dexterous robotic hand, has finally solved a Rubik’s Cube all on its own.

And it didn’t just solve it under ideal conditions. Dactyl managed to solve the cube with two of its fingers tied together, while wearing a rubber glove, and even while being hit by a stuffed giraffe. OpenAI themselves were quite surprised by the result:

We find that our system trained with ADR (Automatic Domain Randomization) is surprisingly robust to perturbations even though we never trained with them: The robot can successfully perform most flips and face rotations under all tested perturbations, though not at peak performance.
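The “ADR” in that quote is the training trick doing the heavy lifting: rather than training in a single simulated world, Automatic Domain Randomization samples each simulator parameter (friction, cube size, and so on) from a range that widens automatically as the policy masters it, which is what produces robustness to perturbations never seen in training. Below is a minimal, illustrative Python sketch of that expansion loop; the parameter names, thresholds, and step sizes are invented for the example.

```python
import random

# Illustrative sketch of Automatic Domain Randomization (ADR): each
# simulator parameter is sampled from a range that widens once the
# policy performs well at its boundaries. Parameter names, thresholds,
# and step sizes are invented for this example.

EXPAND_AT, SHRINK_AT, STEP = 0.8, 0.4, 0.02

# parameter -> [low, high] sampling bounds for the simulator
ranges = {"friction": [0.9, 1.1], "cube_size_scale": [0.97, 1.03]}

def sample_env():
    """Draw one randomized simulation config from the current ranges."""
    return {name: random.uniform(lo, hi) for name, (lo, hi) in ranges.items()}

def adapt(name, side, success_rate):
    """Widen or narrow one range boundary based on performance there."""
    lo, hi = ranges[name]
    if success_rate >= EXPAND_AT:    # policy copes at the boundary: push it out
        lo, hi = (lo - STEP, hi) if side == "low" else (lo, hi + STEP)
    elif success_rate <= SHRINK_AT:  # policy struggles there: pull it back in
        lo, hi = (lo + STEP, hi) if side == "low" else (lo, hi - STEP)
    ranges[name] = [lo, hi]

# e.g. the policy succeeded 85% of the time with friction pinned at its
# high boundary, so that boundary expands and training gets harder.
adapt("friction", "high", 0.85)
print(sample_env())
```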

Before anyone starts screaming that this is the dawn of Skynet, however, OpenAI said that Dactyl only managed to solve the cube 60% of the time and “only 20% of the time for a maximally difficult scramble.”

Image via Nexchange NOW

Original Article: http://www.nexchangenow.com/news/ai/71138/openais-robotic-hand-can-now-solve-a-rubiks-cube-one-handed/

Italy Blocks OpenAI ChatGPT Over Data Breach Concerns

Italy’s data protection agency has temporarily blocked OpenAI’s ChatGPT, an artificial intelligence chatbot, over suspected breaches of data privacy rules. The move comes in response to a recent data breach that the AI platform suffered on March 20. The Italian data watchdog has ordered the immediate limitation of data processing for Italian users by OpenAI, the United States company behind ChatGPT.

In addition to the data breach concerns, the Italian data watchdog cited the lack of information given to users about the data OpenAI collects. The agency noted that there is no legal basis justifying the mass collection and storage of personal data to train the AI’s algorithms. Furthermore, the agency determined that the chatbot’s answers do not always reflect real data, which can amount to inaccurate processing of personal data.

The Italian data watchdog also highlighted a potential breach of ChatGPT’s own terms of service. According to the agency, even though ChatGPT restricts its use to people aged 13 and above, the application has no filter to verify a user’s age. This means that minors could be exposed to content unsuitable for their age.

Apart from Italy, ChatGPT is also facing criticism and legal action from other parts of the world. The Center for Artificial Intelligence and Digital Policy (CAIDP) filed a complaint against ChatGPT on March 31, attempting to stop the release of powerful AI systems to the masses. The CAIDP described the chatbot as a “biased” and “deceptive” platform which is a risk to public safety and privacy.

ChatGPT, created by OpenAI, is an AI chatbot that uses natural language processing to generate human-like responses. It has gained widespread popularity due to its ability to simulate human-like conversations and generate responses that seem to be personalized to the user. However, this popularity has come with increasing concerns over data privacy and potential misuse of personal data.

OpenAI has stated that it is aware of the concerns raised by the Italian data protection agency and is working to address them. The company says it is committed to protecting user privacy and ensuring that its AI systems are used ethically and responsibly, and notes that it is constantly working to improve its systems and address any potential issues or concerns.

In conclusion, the temporary block of ChatGPT in Italy and the legal action against it by the CAIDP highlight the growing concerns over the use of powerful AI systems and their potential impact on privacy and public safety. While AI chatbots like ChatGPT have the potential to revolutionize communication and customer service, it is important that they are used in a responsible and ethical manner, with proper safeguards in place to protect user privacy and prevent misuse of personal data.

Italy Bans Microsoft-Backed AI Chatbot

Italy’s decision to ban the Microsoft-backed AI chatbot, ChatGPT, has caused controversy within the tech industry and the country. The Italian deputy prime minister, Matteo Salvini, criticized the ban as excessive and potentially damaging to national business and innovation.

The ban followed concerns raised by Italy’s national data agency about possible privacy violations and failure to verify the age of users. On Friday, March 31, OpenAI took ChatGPT offline in Italy, which thereby became the first Western country to take measures against the AI chatbot.

Salvini shared his thoughts on the ban in an Instagram post, calling the Privacy Watchdog’s decision that forced #ChatGPT to block access from Italy disproportionate. He argued that dozens of AI-based services are already in operation and that common sense should therefore prevail, since privacy issues concern practically all online services.

Furthermore, Ron Moscona, a partner at the international law firm Dorsey & Whitney and an expert in technology and data privacy, said the ban by the Italian regulators was surprising, as it is unusual to completely ban a service due to a data breach incident.

OpenAI has stated that it adheres to privacy regulations in Europe and is willing to cooperate with Italy’s privacy regulatory body. The company takes measures to minimize personal data when training its AI systems, including ChatGPT, as its goal is for the AI to acquire knowledge about the world, not to obtain information about specific individuals.

While the ban could harm national business and innovation, Salvini hopes that a rapid solution will be found, and ChatGPT’s access to Italy will be restored. “Every technological revolution brings great changes, risks, and opportunities. It is right to control and regulate through international cooperation between regulators and legislators, but it cannot be blocked,” he said.

The AI chatbot is also under scrutiny in other regions worldwide. The Center for Artificial Intelligence and Digital Policy (CAIDP) lodged a complaint against ChatGPT on March 31, intending to prevent the deployment of potent AI systems to the general public. The CAIDP characterized the chatbot as a “biased” and “deceptive” platform that jeopardizes public safety and confidentiality.

In conclusion, the ban on ChatGPT in Italy has created significant controversy within the country and the tech industry. While concerns about privacy and age verification have been raised, the ban has also been criticized as excessive and potentially harmful to national business and innovation. OpenAI has stated that it adheres to privacy regulations in Europe and is willing to cooperate with Italy’s privacy regulatory body. The debate over the regulation of AI chatbots continues worldwide, with concerns about public safety and confidentiality at the forefront.

Japan Supports OpenAI Amid Concerns

OpenAI, the artificial intelligence (AI) company, has received support from Japan amidst a wave of bans by different countries and uncertainties. Japan’s Chief Cabinet Secretary Hirokazu Matsuno announced on April 10 that Japan would consider incorporating AI technology into government systems, including OpenAI’s ChatGPT chatbot, subject to privacy and cybersecurity concerns being addressed.

The announcement followed the alleged data breach of March 20, after which Italy’s data protection watchdog temporarily blocked the chatbot on March 31 and directed OpenAI to immediately restrict data processing for Italian users while an investigation is ongoing.

OpenAI CEO Sam Altman visited Japan to meet with government officials, including Prime Minister Fumio Kishida. Matsuno reiterated his support for OpenAI, stating that the Japanese government would consider adopting its technology if privacy and cybersecurity concerns are addressed.

Altman expressed his enthusiasm about engaging with Japan’s remarkable talent and creating something exceptional for the Japanese people during a press conference in Tokyo. He also mentioned his amazement at the adoption of this technology in Japan.

During his meeting with Kishida, Altman discussed the technology’s potential and how to mitigate its downsides. They also deliberated on how to be mindful of the risks while maximizing AI’s benefits for people. OpenAI is considering opening an office in Japan and expanding its Japanese-language services.

However, OpenAI is currently being investigated by Canada’s privacy commissioner for allegedly collecting and using personal information without consent. On April 4, the Office of the Privacy Commissioner of Canada announced that the probe was initiated following a complaint from an anonymous individual. Privacy Commissioner Philippe Dufresne emphasized that his office is closely monitoring AI technology to protect Canadians’ privacy rights.

OpenAI’s technology has been the subject of controversy in different countries. Japan’s expression of support for the company amid these concerns is a positive development for OpenAI’s efforts to expand its operations globally. OpenAI’s commitment to enhancing its models’ proficiency in the Japanese language and its cultural nuances also shows its dedication to providing effective AI services to Japan. However, addressing privacy and cybersecurity concerns is crucial for OpenAI to gain wider acceptance and adoption of its technology.

OpenAI Launches Bug Bounty Program

OpenAI, the artificial intelligence (AI) company behind ChatGPT, has announced the launch of a bug bounty program to combat privacy and cybersecurity concerns. The program rewards security researchers and ethical hackers for identifying and addressing vulnerabilities in OpenAI’s technology and company, with cash rewards ranging from $200 for low-severity findings to $20,000 for exceptional discoveries.

OpenAI has partnered with Bugcrowd, a bug bounty platform, to manage the submission and reward process, ensuring a streamlined experience for all participants. The company has also offered safe harbor protection for vulnerability research conducted in compliance with its specific guidelines. OpenAI believes that expertise and vigilance will play a crucial role in keeping its systems secure and ensuring users’ security.

The launch of the program comes in the wake of recent bans in different countries on AI technology and concerns about privacy and cybersecurity. On March 20, OpenAI suffered a data breach, which exposed user data due to a bug in an open-source library. The incident highlighted the need for increased security measures and prompted OpenAI to launch the bug bounty program.

The global community of security researchers, ethical hackers, and technology enthusiasts has been invited to participate in the program. OpenAI hopes that the initiative will help identify and address vulnerabilities in its systems and improve its overall security posture.

The program’s rules state that researchers must comply with all applicable laws and regulations, and safe harbor protection is provided for vulnerability research conducted according to OpenAI’s guidelines. Because OpenAI’s systems are connected to third-party systems and services, a third party could conceivably take legal action against a participating researcher; in that case, provided the researcher followed the rules, OpenAI will make it known that the researcher acted within the program’s guidelines.

The launch of the program follows a statement by the Japanese government’s Chief Cabinet Secretary Hirokazu Matsuno, stating that Japan would consider incorporating AI technology into government systems, provided privacy and cybersecurity issues are addressed. OpenAI’s bug bounty program demonstrates the company’s commitment to addressing these concerns and improving its security posture. By inviting the global community of security researchers, ethical hackers, and technology enthusiasts to participate, OpenAI hopes to increase vigilance and expertise, directly impacting the security of its systems and ensuring users’ security.

Italy’s Garante Sets Mandates for OpenAI’s ChatGPT Service

Italy’s data protection agency, Garante, has issued a set of mandates for OpenAI’s ChatGPT service following concerns raised about possible privacy violations and failure to verify the age of users. The watchdog had suspected the artificial intelligence chatbot service of violating the European Union’s General Data Protection Regulation (GDPR) and had mandated the United States-based firm to halt the processing of data belonging to individuals residing in the country.

In a press release, Garante outlined the actions that OpenAI must take to revoke the order imposed on ChatGPT. The mandates require OpenAI to increase its transparency and issue an information notice comprehensively outlining its data processing practices. The statement also requires OpenAI to implement age-gating measures immediately to prevent minors from accessing its technology and adopt more stringent age verification methods.

OpenAI must specify the legal grounds it relies upon for processing individuals’ data to train its AI, and it cannot rely on contract performance. This means that OpenAI must choose between obtaining user consent or relying on legitimate interests. Currently, OpenAI’s privacy policy references three legal bases but appears to give more weight to the performance of a contract when providing services such as ChatGPT.

Furthermore, OpenAI must enable users and non-users to exercise their rights regarding their personal data, including requesting corrections for any misinformation generated by ChatGPT or deleting their data. The regulatory agency mandated that OpenAI allow users to object to processing their data to train its algorithms. OpenAI is also required to conduct an awareness campaign in Italy to inform individuals that their information is being processed to train its AIs.

Garante has set a deadline of April 30 for OpenAI to complete most of these tasks. OpenAI has been granted additional time to comply with the extra demand of migrating from the existing, age-gating child safety technology to a more resilient age verification system. Specifically, OpenAI has until May 31 to submit a plan outlining the implementation of age verification technology that screens out users under 13 years old (and those aged 13 to 18 who have not obtained parental consent). The deadline for deploying this more robust system is set for Sept. 30.

Meanwhile, ChatGPT remains offline in Italy. OpenAI has been granted additional time to comply with the age verification demands, but must still meet the April 30 deadline for the other compliance requirements.

OpenAI’s ChatGPT service has gained significant attention for its ability to generate natural language responses that can mimic human conversation. However, concerns have been raised about the impact of such technology on privacy and the potential for misuse, particularly with regard to minors.

This is not the first time that OpenAI has faced regulatory scrutiny. In 2019, the company announced that it would not release a powerful language-generating AI model due to concerns about its potential misuse. The company has since released similar models with additional safeguards in place.

In conclusion, Garante’s mandates for OpenAI’s ChatGPT service aim to ensure compliance with GDPR and protect the privacy of individuals, particularly minors. 

Elon Musk Developing AI Startup to Rival OpenAI

In a move to expand his footprint in the AI industry, tech entrepreneur Elon Musk is reportedly creating a startup to rival one of his own previous ventures, OpenAI. According to the Financial Times, Musk is assembling a team of AI researchers and engineers to develop a new AI company that will compete with OpenAI. Although Musk resigned from OpenAI’s board in 2018, the launch of the new startup will put him in direct competition not only with OpenAI but also with tech giants like Google and Microsoft.

The report also suggests that Musk is in talks with investors, including existing supporters of SpaceX and Tesla, for investment in the new AI venture. According to a source, “a bunch of people are investing in it, it’s real and they are excited about it.”

This revelation follows a recent report stating that Musk procured almost 10,000 graphics processing units to power Twitter’s AI initiatives. On March 9, Musk also incorporated a company named X.AI, listing himself as its sole director. He changed the name of Twitter to “X Corp” in company filings as part of his plans to create an “everything app” under the “X” brand.

Interestingly, despite Musk’s involvement in AI development, he and over 2,600 other tech leaders and researchers signed an open letter on March 30 calling for a temporary halt on further AI development due to “profound risks to society and humanity.”

In the broader context of AI competition, Amazon Web Services (AWS) has also recently launched its Amazon Bedrock initiative. This will allow AWS users to build generative AI from foundation models.

Overall, Musk’s new AI venture will undoubtedly be one to watch in the coming months. As one of the most well-known and influential tech entrepreneurs of our time, his latest move is sure to capture the attention of the industry and the wider public.

Elon Musk Plans to Launch AI Startup to Compete with OpenAI

According to recent reports, Elon Musk is planning to launch an AI startup to compete with OpenAI, one of the most popular generative AI companies, which he co-founded in 2015. Musk is reportedly assembling a team of AI researchers and engineers for the new venture and is in talks with existing investors in SpaceX and Tesla about funding it. The new AI startup will place Musk among other tech giants, such as Google and Microsoft, in the race to build next-generation AI.

These reports align with earlier news that Musk has been procuring nearly 10,000 graphics processing units to power Twitter’s AI initiatives. In addition, Musk has incorporated a company named X.AI and changed the name of Twitter to “X Corp” in company filings as part of his plans to create an “everything app” under the “X” brand.

However, it is worth noting that Musk and more than 2,600 tech leaders and researchers signed an open letter urging a temporary pause on further AI development on March 30, citing “profound risks to society and humanity.”

Meanwhile, Amazon Web Services (AWS) has launched the Bedrock initiative to allow its users to build generative AI from foundation models. This move by AWS is another sign that tech companies are heavily investing in AI and trying to develop their own AI capabilities.

In conclusion, Elon Musk is reportedly launching an AI startup to compete with OpenAI, and is in talks with existing investors from SpaceX and Tesla for investment in the new venture. Musk’s recent incorporation of X (X.AI) and his plans to create an “everything app” under the “X” brand suggest that he is making significant investments in AI. Additionally, AWS’s Bedrock initiative highlights the ongoing efforts by tech companies to develop their own AI capabilities. While AI holds enormous potential for advancing society, it is crucial that the development of this technology is done in a responsible and ethical manner to minimize potential risks to society and humanity.

Senator Michael Bennet Urges Tech Giants to Curb AI-Generated Misinformation

U.S. Senator Michael Bennet from Colorado has today called on leaders of prominent technology and artificial intelligence (AI) companies, including Meta, Alphabet, Microsoft, Twitter, TikTok, and OpenAI, to implement proactive strategies to combat the proliferation of misleading AI-generated content.

Bennet emphasized the need for identifying and labeling AI-generated content, highlighting the potential risks associated with the unchecked spread of misinformation. He stated, “Online misinformation and disinformation are not new. But the sophistication and scale of these tools have rapidly evolved and outpaced our existing safeguards.”

The Senator pointed out several instances where AI-generated content caused market turmoil and political unrest. He also cited the testimony of OpenAI CEO Sam Altman before the Senate Judiciary Committee, where Altman identified the potential of AI to spread disinformation as a serious concern.

Bennet acknowledged the initial steps taken by technology companies to identify and label AI-generated content. However, he stressed that these measures are voluntary and easily bypassed. He proposed a framework for labeling AI-generated content and asked the companies to provide their identification and watermarking policies and standards.

The Senator concluded, “Continued inaction endangers our democracy. Generative AI can support new creative endeavors and produce astonishing content, but these benefits cannot come at the cost of corrupting our shared reality.”

Bennet has been a strong advocate for digital regulation, youth online safety measures, and enhanced protections for emerging technologies. He recently introduced the Digital Platform Commission Act, the first legislation in Congress to create a dedicated federal agency for overseeing large technology companies and protecting consumers.

This move by Senator Bennet underscores the growing concern about the misuse of AI technology and the urgent need for regulatory measures to ensure its responsible use. It remains to be seen how the tech giants will respond to this call for action.

OpenAI Launches GPT-4 API: A New Era of Chat-Based AI

OpenAI has announced that its long-awaited GPT-4 API is now available to all paying customers, marking a key milestone in the development of artificial intelligence technology.

The move follows the highly successful debut of the ChatGPT API in March, as well as subsequent improvements to the chat-based models.

Since its release in March, the GPT-4 API, hailed as OpenAI’s most capable model, has seen significant demand.

Access to the GPT-4 API with 8K context is now open to all existing API developers with a history of successful payments.

OpenAI intends to open access to new developers by the end of this month, and then to raise rate limits as compute availability allows.

In addition to GPT-4, OpenAI is making the GPT-3.5 Turbo, DALL·E, and Whisper APIs generally available, a sign that these models are now ready for use at commercial scale.
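For developers, general availability means these endpoints can be called through the standard openai Python client. Here is a minimal sketch, assuming an OPENAI_API_KEY environment variable is set; the prompt and audio file name are placeholders.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Image generation via the DALL·E endpoint.
image = client.images.generate(
    prompt="a robotic hand solving a Rubik's Cube", n=1
)
print(image.data[0].url)

# Speech-to-text via the Whisper endpoint ("press_briefing.mp3" is a placeholder).
with open("press_briefing.mp3", "rb") as audio:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)
print(transcript.text)
```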

The Chat Completions API currently accounts for 97% of OpenAI’s total API GPT usage, indicating that the firm is shifting its focus away from text completions and toward chat completions.

OpenAI intends to keep investing in the Chat Completions API, confident that it offers developers a more capable and simpler-to-use experience.

OpenAI strongly advises users to switch to the Chat Completions API, and the announcement doubles as notice that older versions of the Completions API will soon be phased out.
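To see what that migration involves, here is a minimal sketch using the openai Python client: the legacy Completions call takes a bare prompt string, while the recommended Chat Completions call takes a list of role-tagged messages. Model names and prompts are illustrative, and the snippet assumes an OPENAI_API_KEY environment variable.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Legacy Completions style (being phased out): a single prompt string.
legacy = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt="Summarize the GPT-4 API announcement in one sentence.",
)
print(legacy.choices[0].text)

# Chat Completions style (recommended): role-tagged messages, which let
# the model track multi-turn conversation state.
chat = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize the GPT-4 API announcement in one sentence."},
    ],
)
print(chat.choices[0].message.content)
```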

The company foresees a future in which chat-based models can support every use case, ushering in a new era in the development of artificial intelligence.
