Axon Explores Blockchain to Fight Body-Cam Deepfake Videos

Axon Enterprise Inc., a leading manufacturer of body-cams for United States law enforcement agencies, has produced new cameras enhanced with blockchain technology in an attempt to thwart ‘deepfake’ videos.

Artificial Videos 

According to Reuters, the rollout of Axon’s Body 3 camera is a direct response to concerns from lawmakers and the public about “deepfake” videos, a form of video tampering that uses artificial intelligence to create highly convincing synthetic footage.

Public access to the software needed to create deepfakes is growing, which has prompted US lawmakers to demand high-tech solutions to combat the issue. Deepfake videos are nearly impossible to discern with the naked eye and can empower malicious actors to sow confusion or discredit an individual, which could have devastating consequences in high-stakes settings such as criminal justice.

The chaos that manipulated video can cause was evident when a “cheapfake” of U.S. House Speaker Nancy Pelosi surfaced; the clip had been manually slowed down to make her appear intoxicated and slurring her speech.

Body-cam footage has been used as evidence in cases of alleged police misconduct. Defense counsels and civil liberties groups have called into question the integrity of some police videos, citing changes in timestamps and noticeable edits. In a statement to Reuters, a spokesman for Axon said, “Axon recognizes the threat posed by ‘deepfakes’ to cause general mistrust in the integrity of any video, including body-worn camera videos.” In the Body 3 camera, Axon has included a secure digital signature to help track the authenticity of videos. The company declined to elaborate further on the device’s software features, citing the need to protect its intellectual property.
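Axon has not published the details of its signing scheme, but the underlying idea is a standard one: sign a cryptographic hash of each clip at capture time, then check the signature before the footage is relied on. The sketch below is a minimal illustration of that general technique in Python, using the widely available cryptography package; the key handling and function names are hypothetical and are not Axon’s implementation.

```python
# Illustrative only: signs the SHA-256 digest of a clip at capture time so
# that later verification can reveal any post-capture modification.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical per-device key; a real camera would keep this in secure hardware.
device_key = Ed25519PrivateKey.generate()
device_pubkey = device_key.public_key()


def sign_clip(path: str) -> bytes:
    """Sign the SHA-256 digest of a recorded clip."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).digest()
    return device_key.sign(digest)


def verify_clip(path: str, signature: bytes) -> bool:
    """Return True only if the clip still matches the capture-time signature."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).digest()
    try:
        device_pubkey.verify(signature, digest)
        return True
    except InvalidSignature:
        return False  # file was altered after signing, or the signature is bogus
```

Any re-encode, edit, or metadata change alters the digest and causes verification to fail, which is the tamper-evidence property that matters when body-cam footage is offered as evidence.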


Tencent Cloud Launches Deepfake Generator

Deepfake Generator, a New Service Offered by Tencent Cloud, Gives Users the Ability to Generate Fake Videos of Other People

Tencent Cloud, the cloud services arm of Chinese tech giant Tencent, has introduced a new digital human production platform that uses deepfake technology to generate synthetic videos of real people. Powered by Tencent’s in-house artificial intelligence (AI) capabilities, the platform can produce convincing videos from a three-minute video clip and 100 words of audio content.

Scammers are increasingly using deepfake videos to impersonate well-known figures and deceive investors, and such videos have become more common. Even Tesla CEO Elon Musk has fallen victim to deepfakes, prompting him to warn about the growing number of impersonators who use his likeness to promote cryptocurrency schemes.

According to Jiemian, a local media outlet, the deepfake generator on Tencent Cloud can analyze and train on a three-minute video and 100 speech samples, producing a convincing deepfake video within 24 hours. The service costs 1,000 yuan (roughly $145). According to The Register, Tencent has acknowledged the service and emphasized that it can create deepfakes in both Chinese and English.

Digital humans can be created in one of five styles: 3D realistic, 3D semi-realistic, 3D cartoon, 2D real person, and 2D cartoon. Tencent plans to use the service to offer live-streamed infomercials aimed at Chinese audiences, and it sees further potential applications, such as digital stand-ins for attorneys, physicians, and other professionals.

Tencent isn’t the only Chinese tech company building generative AI tools to compete with market leader ChatGPT; Huawei and Baidu are developing their own. Concerns have nonetheless been raised about the ethical implications of deepfake technology and its potential for fraudulent or harmful use, making it essential for companies such as Tencent to ensure their deepfake generation platforms are used ethically and with appropriate safeguards.

US Officials Warn of AI's Role in Cyber Crimes

The evolving landscape of artificial intelligence (AI) is not only a frontier of innovation but also a source of burgeoning challenges, especially in cybersecurity and the legal system. Recent developments and commentary from U.S. authorities shed light on strategies to manage the potential risks associated with AI advancements.

AI in Cybersecurity: A Double-Edged Sword

AI’s role in cybersecurity is emerging as a critical concern for U.S. law enforcement and intelligence officials. Notably, at the International Conference on Cyber Security, Rob Joyce, the director of cybersecurity at the National Security Agency, underscored AI’s role in lowering technical barriers for cyber crimes, such as hacking, scamming, and money laundering. This makes such illicit activities more accessible and potentially more dangerous.

Joyce elaborated that AI allows individuals with minimal technical know-how to carry out complex hacking operations, potentially amplifying the reach and effectiveness of cyber criminals. Corroborating this, James Smith, assistant director of the FBI’s New York field office, noted an uptick in AI-facilitated cyber intrusions.

Highlighting another facet of AI in financial crimes, federal prosecutors Damian Williams and Breon Peace expressed concern about AI’s capacity to craft scam messages and generate deepfake images and videos. These technologies could subvert identity verification processes, posing a substantial threat to financial security systems that criminals and terrorists could exploit.

This dual nature of AI in cybersecurity — as a tool for both perpetrators and protectors — presents a complex challenge for law enforcement agencies and financial institutions worldwide.

AI in the Legal System: Navigating New Challenges

In the legal arena, AI’s influence is becoming increasingly prominent. Chief Justice John Roberts of the U.S. Supreme Court has called for cautious integration of AI in judicial processes, particularly at the trial level. He noted the potential for AI-induced errors, such as the creation of fictitious legal content. In a proactive move, the 5th U.S. Circuit Court of Appeals proposed a rule mandating lawyers to validate the accuracy of AI-generated text in court documents, reflecting the need to adapt legal practices to the age of AI.

Diverse Responses to AI Regulation

In reaction to these multifaceted threats, President Biden’s Executive Order on the safe, secure, and ethical use of AI marks a significant step. It seeks to establish standards and rigorous testing protocols for AI systems, especially in sectors of critical infrastructure, and includes a directive for developing a National Security Memorandum for responsible AI use in the military and intelligence sectors.

The responses to these regulatory efforts are varied. While some, such as Senator Josh Hawley, favor a litigation-driven approach to AI regulation, others argue for swifter, more direct regulatory action given the rapid pace of AI advancement.

Echoing these concerns, the Federal Trade Commission (FTC) and the Department of Justice have warned against AI-related civil rights and consumer protection law violations. This stance is indicative of an increasing awareness of AI’s potential to amplify biases and discrimination, underscoring the urgent need for effective and enforceable AI governance frameworks.

Hong Kong Government Alerts Public to AI-Generated Scams Featuring Deepfake of Chief Executive

The Hong Kong government has issued a stern warning about the dangers of AI-generated scams following the circulation of a deepfake video featuring Chief Executive John Lee Ka-chiu. This fraudulent video portrayed Lee endorsing an investment scheme with supposedly high returns. The government has denounced this as a bogus creation, emphasizing the importance of public awareness regarding such deceitful tactics.

This incident is part of a growing trend where scammers use sophisticated AI technology to create convincing deepfakes of public figures. In September 2022, the Chief Executive’s Office had issued a similar warning when Lee’s image and fabricated quotes were used to attract people to a suspicious online trading platform. This platform falsely claimed an endorsement from Lee for a cryptocurrency trading system, including a fabricated interview and a link to their site.

Such AI-generated scams are increasingly sophisticated, exploiting the trust people place in familiar faces and authorities. These scams vary in their approach, including fake video interviews with cloned voices of notable personalities and even threats using manipulated videos. For example, one victim lost HK$1,700 in computer game credits due to a fake video interview with a bank chief executive’s cloned voice. Another incident involved a man who was threatened with a video where his face was superimposed onto explicit content.

The Hong Kong police had alerted the public as early as July 2022 about the rise of AI-generated scams. The police and government authorities are emphasizing the importance of vigilance and verification of the authenticity of online promotions and content. The growing sophistication of these scams poses a significant challenge for law enforcement and requires continuous adaptation and collaboration between government, technology experts, and law enforcement agencies.

In the digital age, where AI and technology play increasingly significant roles, public awareness and skepticism are key to guarding against such deceptive practices. The Hong Kong government’s warnings highlight the need for critical thinking and caution in the face of such advanced fraudulent schemes.

Deepfake Dangers: Michael Saylor Alerts Followers to Emerging Bitcoin Scams

Deepfake Scams on the Rise

Michael Saylor, the Chairman of MicroStrategy, recently warned his 3.2 million followers about the proliferation of deepfake videos on YouTube. These AI-generated videos falsely portray Saylor promoting Bitcoin scams, a growing trend that poses a significant threat to the cryptocurrency community. In a notable post on X (formerly Twitter), Saylor emphasized, “There is no risk-free way to double your #bitcoin, and @MicroStrategy doesn’t give away $BTC to those who scan a barcode.”

The Threat of AI in Cryptocurrency Scams

The use of artificial intelligence to create deepfake videos has become a tool for fraudsters. These sophisticated scams depict prominent figures such as Saylor urging viewers to scan a barcode and send Bitcoin, with the false promise of doubling their investment. Saylor’s security team reportedly takes down about 80 fake AI-generated YouTube videos daily, yet scammers persistently create more.

Previous Incidents

This tactic isn’t new in the cryptocurrency world. In November 2023, Ripple CEO Brad Garlinghouse was targeted by similar AI-generated scams. The increasing sophistication of these scams, leveraging AI technology, requires the crypto community to be vigilant. Users are advised to verify sources and be skeptical of unrealistic promises, especially when they involve sending funds to unknown addresses or platforms promising high returns.

Saylor’s Personal and MicroStrategy’s Bitcoin Holdings

Amid these warnings, it is notable that Saylor recently revealed plans to sell $216 million worth of MicroStrategy shares, intending to buy more Bitcoin for his personal holdings. As of December 2023, the business intelligence firm reported holding 189,150 bitcoins.

The Double-Edged Sword of AI

The rise of AI technology, while offering numerous benefits across various industries, also presents a dual nature. Its ability to create convincing deepfakes has become a potent tool for deception, particularly in the digital currency space. As AI continues to evolve, so does the necessity for individuals and organizations to remain vigilant against digital deception.

Conclusion

The increasing use of deepfake technology in cryptocurrency scams highlights a significant threat to investors. Saylor’s proactive approach in warning the public and actively combating these scams underscores the need for awareness and vigilance in the digital age. It is crucial for investors to practice due diligence, verify the authenticity of information, and be wary of schemes that sound too good to be true.

U.S. Moves to Combat Deepfake Pornography with the Preventing Deepfakes of Intimate Images Act

U.S. Representative Joe Morelle has taken a step to combat the spread of deepfake pornography by introducing the Preventing Deepfakes of Intimate Images Act, HR 3106. This bipartisan legislation aims to address the increasing problem of deepfake pornography generated through artificial intelligence, with a particular focus on its widespread impact on women and girls.

The Act is a response to the alarming trend where 96 percent of all deepfakes are pornographic, almost exclusively targeting women. The damage caused by these images, though fake, has profound real-world consequences. In advocating for this legislation, Morelle, along with victims of deepfakes and other supporters, has emphasized the urgent need for federal action to provide protection and legal recourse against this form of exploitation.

A key aspect of the bill is its focus on the non-consensual nature of these deepfakes. It criminalizes the disclosure of non-consensual intimate deepfakes intended to harass, harm, or alarm the victim. The proposed penalties are substantial, including fines and imprisonment, with harsher penalties for disclosures that could impact government functions or facilitate violence.

Additionally, the legislation would grant victims the right to file civil lawsuits against the creators and distributors of non-consensual deepfakes while remaining anonymous. This approach is intended to offer a more comprehensive form of justice, allowing victims to seek monetary damages and punitive measures against perpetrators.

This move by Representative Morelle is part of a larger conversation about the ethical use of AI and the need for legal frameworks to keep pace with technological advancements. The bill also highlights the necessity of ensuring that AI and technology are not used to perpetuate harm, particularly against vulnerable groups like women and minors. The introduction of this Act underscores the growing awareness and concern about the potential abuses of AI in creating deepfakes and the need for stringent laws to prevent such abuses.

US Senate Introduces DEFIANCE Act to Combat AI-Generated Nonconsensual Deepfakes

The United States Senate is currently considering the Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2024, commonly known as the DEFIANCE Act. This bipartisan bill was introduced in response to the growing concern over nonconsensual, sexually explicit “deepfake” images and videos, particularly those created using artificial intelligence (AI). The introduction of this legislation was significantly propelled by recent incidents involving AI-generated explicit images of the singer Taylor Swift, which spread rapidly across social media platforms.

The DEFIANCE Act aims to provide a federal civil remedy for victims who can be identified in these “digital forgeries,” a term the legislation defines as visual depictions created using software, machine learning, AI, or other computer-generated means to falsely appear authentic. The Act targets the creation, possession, and distribution of such nonconsensual AI-generated explicit content and would set a statute of limitations of ten years, starting from when the person depicted becomes aware of the images or turns 18.

The need for such a law is underscored by a 2019 study which found that 96% of deepfake videos were non-consensual pornography, often used to exploit and harass women, particularly public figures, politicians, and celebrities. The widespread distribution of these deepfakes can lead to severe consequences for victims, including job loss, depression, and anxiety.

Currently, there is no federal law in the United States specifically addressing the rise of digitally forged pornography modeled on real people, although some states like Texas and California have their own legislation. Texas criminalizes the creation of illicit AI content, with potential jail time for offenders, while California allows victims to sue for damages.

The bill’s introduction comes at a time when the issue of online sexual exploitation, especially involving minors, is receiving significant attention. The Senate Judiciary Committee, in a hearing entitled “Big Tech and the Online Child Sexual Exploitation Crisis,” is examining the role of social media platforms in the spread of such content and the need for legislative action.

This legislative initiative highlights the growing concern over the misuse of AI technology in creating deepfake content and the need for legal frameworks to protect individuals from such exploitation and harassment.

AI-Driven "Audio-Jacking": IBM Uncovers New Cybersecurity Threat

Researchers at IBM Security have disclosed a novel cybersecurity threat they call “audio-jacking,” which uses artificial intelligence (AI) to intercept and modify live conversations in real time. The method relies on generative AI to clone a person’s voice from just three seconds of audio, letting attackers seamlessly replace the original speech with altered content. Such a capability could enable serious abuse, such as redirecting financial transactions or altering information delivered in live broadcasts and political speeches.

The technique is surprisingly straightforward to implement: AI algorithms monitor live audio for certain trigger phrases, and when those phrases are detected, the system inserts deepfake audio into the conversation without the participants noticing. This could compromise sensitive data or mislead people, with potential abuses ranging from financial crime to disinformation in critical communications.

The IBM team demonstrated that building such a system is not especially complicated; most of the effort goes into capturing live audio and integrating it with generative AI tools rather than into manipulating the content itself. They highlighted potential abuse in a variety of scenarios, such as altering banking details mid-conversation, which could lead unsuspecting victims to transfer funds to bogus accounts.
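To make concrete why the researchers considered the orchestration simple, the sketch below shows only the trigger step they describe: scanning a live-call transcript for sensitive phrases. The phrase list and function name are hypothetical, a speech-to-text stream is assumed to supply the transcript chunks, and no audio manipulation is involved; the same check could equally drive a defensive alert that prompts participants to repeat and verify details, in line with IBM’s recommendation below.

```python
import re

# Hypothetical trigger phrases; IBM has not published its actual keyword list.
SENSITIVE_PATTERNS = [r"\bbank account\b", r"\brouting number\b", r"\bwire transfer\b"]


def flag_sensitive_mentions(transcript_chunk: str) -> list[str]:
    """Return the sensitive phrases found in one chunk of a live-call transcript.

    In the attack IBM describes, a hit like this is what triggers the audio
    substitution; in a defensive tool, the same hit can raise a prompt asking
    participants to repeat and verify the detail out of band.
    """
    return [p for p in SENSITIVE_PATTERNS
            if re.search(p, transcript_chunk, re.IGNORECASE)]


if __name__ == "__main__":
    # Assumed: a speech-to-text service feeds transcript chunks like this one.
    chunk = "Sure, let me read you the bank account number for that invoice."
    print(flag_sensitive_mentions(chunk))  # -> ['\\bbank account\\b']
```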

To counter this threat, IBM recommends countermeasures such as paraphrasing and repeating essential information during conversations to verify its authenticity, a strategy that can expose the discrepancies introduced by AI-generated audio.

The findings underscore the growing sophistication of cyber threats in the age of powerful artificial intelligence and the need to stay vigilant and develop creative security measures against vulnerabilities of this kind.

Multinational Firm Loses $25.6 Million to Deepfake-Driven Fraud

A multinational corporation headquartered in Hong Kong fell victim to a cyber heist that could have been lifted from the script of a high-tech thriller, with scammers stealing $25.6 million. The fraudsters used deepfake technology to impersonate the firm’s chief financial officer, based in the United Kingdom, along with other recognizable executives during a video conference. A finance worker who was initially skeptical of the request to transfer funds to five local bank accounts across fifteen transactions was ultimately persuaded by the realistic deepfake likenesses of the executives. With deepfakes becoming more convincing and more accessible to malicious actors, the incident is a stark warning of the evolving threats posed by artificial intelligence.

About a week later, the employee tried to verify the transactions within the organization, only to discover that he had been deceived. The case underscores the urgent need for enterprises to strengthen their cybersecurity defenses and for individuals to stay alert to such sophisticated fraud. Hong Kong police have made six arrests in connection with the scheme, which reflects the scammers’ meticulous preparation: they used stolen identity cards for bank account registrations and loan applications and, in many cases, even managed to fool facial recognition software.

The swindle is part of a growing and worrying trend in which deepfake technology is used for malicious ends, from creating non-consensual sexual images of celebrities to carrying out complex financial crimes. It also raises serious questions about the security of video conferencing platforms and the importance of verifying identities in digital interactions.

The growing capabilities of artificial intelligence and deepfake technologies pose major challenges for cybersecurity. Combating these evolving threats requires both technical solutions and greater awareness among individuals and organizations.
