Ex-Cambridge Analytica Employee Confident that Blockchain Can Help Protect Personal Data

Brittany Kaiser, the Cambridge Analytica scandal whistleblower, said in an interview at the World Economic Forum that blockchain technology could be an essential tool for addressing data protection issues.

The Cambridge Analytica scandal broke in 2018, when it was revealed that the data of over 87 million Facebook users had been harvested through a personality quiz. Cambridge Analytica was involved in US President Donald Trump’s election campaign, as well as the Leave.EU Brexit campaign, where Kaiser appeared on behalf of the firm at a press conference. Although the consultancy said it never signed a contract to work with the British campaign, Kaiser alleged that some work was nevertheless done.

Kaiser is now the co-founder of the Own Your Data Foundation, a digital intelligence startup. In her interview, she said, “In my opinion, it’s really blockchain tech and blockchain entrepreneurs that are going to solve a lot of problems of the data protection crisis.”

Kaiser believes that personal data is one of the most valuable assets in the world and that blockchain can help people protect it. She added that Germany and Switzerland are currently two of the best countries in terms of data protection, whereas the United States, her native country, has had almost no data legislation or regulation. She noted, however, that the US has been working on data privacy initiatives.

 

Samsung Launches New Secure Element Chip to Enhance Data Protection for Crypto Transactions

South Korean tech giant Samsung has announced a revolutionary new turnkey security solution to secure cryptocurrency transactions on its smartphones and tablets.

Cryptocurrency transactions are one of the primary purposes of Samsung’s new Secure Element chip, which is expected to be available in Q3 2020. 

The solution centers on the Secure Element (SE) chip S3FV9RR, which is certified to Common Criteria Evaluation Assurance Level (CC EAL) 6+. The new SE chip, together with enhanced software, is designed to offer stronger protection for tasks including booting, isolated storage, mobile payment, and other applications.

Dongho Shin, the senior vice president of System LSI marketing at Samsung Electronics, said, “In this era of mobility and contact-less interactions, we expect our connected devices, such as smartphones or tablets, to be highly secure so as to protect personal data and enable fintech activities such as mobile banking, stock trading, and cryptocurrency transactions.”

The new S3FV9RR chip is an enhanced turnkey solution following the first-generation solution (S3K250AF) announced in February. It offers twice the secure storage capacity and supports a hardware-based root of trust (RoT), secure boot, and device authentication, taking mobile security to the next level.
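The chain-of-trust idea behind features like a hardware root of trust and secure boot can be sketched in a few lines: each boot stage is accepted only if its signature verifies against a key anchored in hardware. The sketch below is purely illustrative and is not Samsung's implementation, which has not been published; the HMAC-based "signature", the root key constant, and all stage names are hypothetical stand-ins, and a real chip would use asymmetric signatures with keys burned into silicon.

```python
# Illustrative sketch of a secure-boot chain of trust (not Samsung's
# design; all names here are hypothetical stand-ins).
import hashlib
import hmac

# In a real Secure Element the root key is fixed in hardware;
# this constant merely stands in for that root of trust.
HARDWARE_ROOT_KEY = b"example-root-key"

def sign(key: bytes, image: bytes) -> bytes:
    """Stand-in for a signature: an HMAC over the stage image."""
    return hmac.new(key, image, hashlib.sha256).digest()

def verify_stage(key: bytes, image: bytes, signature: bytes) -> bool:
    """A stage is trusted only if its signature checks out."""
    return hmac.compare_digest(sign(key, image), signature)

def boot(stages):
    """Walk the chain: every stage must verify against the root key,
    otherwise boot halts before any untrusted code would run."""
    for name, image, signature in stages:
        if not verify_stage(HARDWARE_ROOT_KEY, image, signature):
            raise RuntimeError(f"secure boot halted at {name}")
    return "booted"

# Build a valid chain of boot stages, then boot it.
bootloader = b"bootloader-v2"
kernel = b"kernel-v5"
chain = [
    ("bootloader", bootloader, sign(HARDWARE_ROOT_KEY, bootloader)),
    ("kernel", kernel, sign(HARDWARE_ROOT_KEY, kernel)),
]
print(boot(chain))  # prints "booted"
```

The point of anchoring verification in hardware is that tampering with any stage image breaks the chain: a modified kernel no longer matches its signature, so the device refuses to boot rather than run untrusted code.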

Bitcoin support for blockchain phones

Samsung accounts for 19 percent of global smartphone sales, having sold over 300 million phones in the last year. Samsung’s Galaxy S10 range launched a year ago with the Blockchain Keystore feature, offering storage of cryptocurrencies and support for Ether and ERC-20 token transactions. Samsung has since added Bitcoin to the Blockchain Keystore on its blockchain-enabled smartphones.

The Bitcoin features added to the developer kit are available on the S10 models, as well as the Note10 and Note10+ devices. Samsung is still developing a blockchain mainnet based on Ethereum and may release its own token in the near future.


ChatGPT Ban Lifted in Italy

OpenAI’s interactive AI chatbot, ChatGPT, has had its temporary ban in Italy lifted after being accused of violating the GDPR. On March 31, the Italian data protection agency, Garante, issued the ban after suspecting that ChatGPT had violated the European Union’s General Data Protection Regulation. The ban was lifted on April 29, after the company complied with the regulator’s transparency demands and implemented age-gating measures.

To satisfy the Italian regulator’s demands, OpenAI was required to disclose ChatGPT’s data processing practices and meet other legal requirements. The ban had been issued in response to a data breach that occurred on March 20. The company’s cooperation with local authorities is viewed positively, and its willingness to meet transparency demands has been widely welcomed by ChatGPT’s global user base.

The ban on ChatGPT initially raised concerns about potential AI regulations. However, OpenAI’s swift compliance with local authorities indicates a positive move towards the regulation of AI. European Union legislators are working on a new bill to monitor AI developments. If this bill is signed into law, generative AI tools like ChatGPT and Midjourney will be required to disclose any copyrighted materials used in AI training.

ChatGPT is a popular interactive AI chatbot developed by OpenAI, capable of conversing with users on a wide range of topics. The chatbot uses deep learning techniques to analyze user input and generate responses. It has gained widespread popularity and is used by individuals and businesses globally.

The ban on ChatGPT in Italy highlights the importance of complying with data protection regulations. GDPR is a set of regulations designed to protect the privacy and personal data of individuals within the European Union. Companies that operate within the EU or process the personal data of EU residents are required to comply with GDPR. Failure to comply with GDPR can result in significant fines and legal penalties.

OpenAI’s swift compliance with the Italian regulator’s demands demonstrates the company’s commitment to data protection and privacy. It also highlights the need for companies to be transparent about their data processing practices and to implement measures to protect user privacy. The lifting of the ban on ChatGPT is a positive development for the AI industry and underscores the importance of compliance with data protection regulations.

EBA Opens Consultation on Liquidity Stress Test Guidelines for Crypto Assets

The European Banking Authority (EBA) released a consultation paper dated 08 November 2023, outlining proposed guidelines for liquidity stress testing of asset-referenced tokens as mandated by the newly instituted Regulation (EU) 2023/1114. This document, identified as EBA/CP/2023/27, sets forth the framework for common reference parameters for these stress tests, reflecting the regulatory focus on the stability and resilience of the crypto market.

The consultation seeks feedback on several key areas, with the EBA inviting comments particularly on the specific questions summarized in section 5.2 of the paper. Stakeholders are encouraged to respond with evidence-backed viewpoints, ensuring their responses are clear, rational, and directly related to the queries posed.

The deadline for submitting comments has been set for 08 February 2024, indicating a comprehensive review period. This timeline suggests the EBA’s commitment to a thorough consultative process with industry players, aiming for robust regulatory practices.

In tandem with the call for feedback, the EBA has underscored the importance of data protection, adhering to Regulation (EU) 2018/1725, which safeguards personal data processed by the EBA. This is a critical assurance for stakeholders concerned about the confidentiality of their responses.

The EBA has made provisions for responses to be kept confidential upon request, in line with its public access to documents policy. Any decision to not disclose certain responses will be open to review by the EBA’s Board of Appeal and the European Ombudsman, ensuring a transparent and fair process.

The draft guidelines come in response to Article 45(4) of Regulation (EU) 2023/1114, requiring issuers of significant asset-referenced tokens to conduct regular liquidity stress tests. This regulation extends to electronic money institutions issuing e-money tokens that are considered significant under Article 58(1), point (a), of the same regulation. Competent authorities of member states may also require these tests from issuers that are not deemed significant, emphasizing a broad scope of regulatory oversight.

As the crypto asset landscape continues to evolve, the EBA’s proposed guidelines mark a proactive step towards establishing a more secure and resilient financial ecosystem. The finalized guidelines, post-consultation, are expected to shape the conduct of liquidity stress tests, ensuring that issuers of crypto assets are prepared for adverse market conditions.

Dutch Government Invests Heavily in AI to Compete Globally

The Dutch government, in a significant move to bolster its position in the rapidly evolving field of artificial intelligence (AI), has committed a substantial investment of €204.5 million ($222.07 million). This strategic initiative, announced on January 18, 2024, is part of the Netherlands’ comprehensive approach to harnessing the capabilities of generative AI systems, including advanced technologies like ChatGPT.

Contextualizing this development, the European Union had reached a political agreement on a risk-based model for AI regulation in December 2023. While some details are still under finalization, the Dutch government has proactively adopted the essence of the EU’s landmark AI Act as immediate law. This preemptive adoption by the Netherlands marks a pivotal step in AI governance, demonstrating the country’s willingness to engage actively with the dynamic AI landscape, even as the EU Act awaits formal enactment.

A significant aspect of the Dutch government’s investment is the fostering of local AI development. The funds are aimed at invigorating the AI sector within the Netherlands, enabling it to compete more effectively on the international stage. The government’s initiative is not just about financial investment but also encompasses regulatory clarity and support for AI research and innovation. This move is essential, considering the leading positions of Asia and the United States in the use of responsible generative AI.

The Dutch government’s strategy extends beyond mere development and investment. It also includes organizing campaigns to educate the populace about data protection in the context of generative AI. Additionally, there’s an ongoing inquiry into establishing a secure and functional national AI testing facility for public use. This comprehensive approach underscores the Dutch government’s commitment to not only developing AI technology but also to ensuring its responsible and ethical use.

Moreover, the government’s stance on AI reflects a broader trend within Europe to position itself as a global leader in the development of responsible and innovative AI. The Dutch Minister for Education, Culture and Science, Robbert Dijkgraaf, highlighted the need to develop and retain AI talent and to create generative AI forms that meet European standards and values.

The Dutch government’s investment and regulatory approach to AI are vital steps in the global race for AI supremacy. They recognize the importance of not only keeping pace with international developments but also of setting standards that align with European values. This strategic investment in AI showcases the Netherlands’ ambition to be a frontrunner in the global AI arena, contributing to the advancement of AI technology while prioritizing ethical considerations and human wellbeing.

OpenAI's ChatGPT Under GDPR Investigation in Italy: Facing 30-Day Defense Deadline

The Italian Data Protection Authority, Garante, has raised concerns about potential violations of the European Union’s General Data Protection Regulation (GDPR) by OpenAI’s ChatGPT. The formal notice issued to OpenAI follows a multi-month investigation and alleges breaches of EU privacy regulations. OpenAI has been given a 30-day period to respond and present a defense against these allegations.

Previously, the Italian authority had ordered a temporary ban on ChatGPT’s local data processing in Italy, citing issues such as the lack of a suitable legal basis for collecting and processing personal data for training ChatGPT’s algorithms. Concerns about child safety and the AI tool’s tendency to produce inaccurate information were also noted. OpenAI temporarily addressed some of these issues, but now faces preliminary conclusions that its operations might be violating EU law. The core issue revolves around the legal basis OpenAI has for processing personal data to train its AI models, considering that ChatGPT was developed using data scraped from the public internet.

OpenAI initially claimed “performance of a contract” as a legal basis for ChatGPT model training, but this was contested by the Italian authority. Now, the only potential legal bases left are consent or legitimate interests. Obtaining consent from numerous individuals whose data has been processed seems impractical, leaving legitimate interests as the primary legal basis. However, this basis requires OpenAI to allow data subjects to object to the processing, which poses challenges for an AI chatbot’s continuous operation.

In response to increasing regulatory risks in the EU, OpenAI is seeking to establish a physical base in Ireland, aiming to have GDPR compliance oversight led by Ireland’s Data Protection Commission. This move is part of a broader effort to address concerns related to data protection across the EU. In addition to the Italian investigation, OpenAI is also under scrutiny in Poland following a complaint about inaccurate information produced by ChatGPT and OpenAI’s response to the complainant.

The outcome of this investigation will likely have significant implications not only for ChatGPT but also for the broader landscape of AI applications and their adherence to data protection standards in the EU. As the situation unfolds, it highlights the challenges and complexities that innovative technologies like AI chatbots face in navigating the stringent data protection regulations in Europe.
