Weekly analysis of AI & Emerging Tech news from an analyst's point of view.

1๏ธโƒฃ๐——๐—ฒ๐—ฒ๐—ฝ๐—ฆ๐—ฒ๐—ฒ๐—ธ ๐—”๐—œ - ๐—จ๐˜€๐—ฒ ๐—ถ๐˜ ๐—ฎ๐˜ ๐˜†๐—ผ๐˜‚๐—ฟ ๐—ผ๐˜„๐—ป ๐—ฟ๐—ถ๐˜€๐—ธ!!!

Details:

  • Palo Alto Networks' threat intelligence and incident response division, Unit 42, coaxed detailed instructions for making a Molotov cocktail out of DeepSeek.
  • CalypsoAI extracted advice on how to evade law enforcement.
  • Israeli cyber threat intelligence firm Kela convinced R1 to produce malware.
  • DeepSeek disputed these claims, stating it is a "fiercely independent and market-driven company" and that its models are designed to provide "neutral and objective" information 😂

Analysis: Generative AI models are trained on vast amounts of data, and data reflects the biases and agendas of those who create it, curate it, and control access to it. DeepSeek is programmed with some basic safety precautions, but they are very weak. People need to understand that this model was not built for enterprise use. A multi-billionaire hedge fund manager built this LLM to aid with trading and decided to open-source it. While nothing stops anyone from using it, extreme caution is warranted before it is deployed in production.

Almost all cloud providers (AWS, Google, Azure, IBM, and many others) are promoting its use on their platforms. Some provide advanced security and platform features, but most offer it on a use-it-at-your-own-risk basis. That could be dangerous if it is deployed in production systems without proper testing and safeguards around it. The main thing hosting it locally with a hyperscaler provides is that user data will not be sent directly back to China (it has not yet been tested and validated that there are no back channels that might allow bulk export of data).
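To make the "testing and safeguards" point concrete, below is a minimal, hypothetical Python sketch of the kind of guardrail layer that should sit between any self-hosted open-weight model and a production caller. The blocked-pattern list, the is_unsafe check, and the guarded_generate wrapper are illustrative placeholders for a real moderation model, policy engine, and red-team test suite; none of this is DeepSeek code, and a keyword filter alone is nowhere near sufficient.

```python
import re

# Placeholder policy: in production this would be a proper moderation classifier,
# not a handful of regexes.
BLOCKED_PATTERNS = [r"molotov", r"evade law enforcement", r"\bmalware\b"]

def is_unsafe(text: str) -> bool:
    """Crude keyword screen; stands in for a real moderation model."""
    return any(re.search(p, text, re.IGNORECASE) for p in BLOCKED_PATTERNS)

def guarded_generate(model_call, prompt: str) -> str:
    """Screen both the prompt and the model output before returning anything."""
    if is_unsafe(prompt):
        return "Request refused by policy."
    output = model_call(prompt)
    if is_unsafe(output):
        return "Response withheld by policy."
    return output

# Usage with any callable that wraps your hosted model endpoint, e.g.:
# answer = guarded_generate(lambda p: my_hosted_model(p), user_prompt)
```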

Anthropic recently published a paper detailing a new method to close off certain jailbreaks and offered bounties of up to $20,000 for defeating its system. Interestingly, even though DeepSeek can be jailbroken for many simple yet dangerous tasks, it appears to censor or walk back its answers when questioned about the Chinese government, military, foreign policy, or related well-known facts.

Experimenting with AI to understand its capabilities is fine, but use it with extreme caution. AI produced by authoritarian states can become a vehicle for disinformation and propaganda.

🚫☠️⚠️ Caution (key findings from NowSecure's analysis of the DeepSeek iOS app):

  1. Unencrypted Data Transmission: The app transmits sensitive data over the internet without encryption, making it vulnerable to interception and manipulation.
  2. Weak & Hardcoded Encryption Keys: The app uses the outdated Triple DES (3DES) cipher, reuses initialization vectors, and hardcodes encryption keys, violating basic security practices (see the sketch after this list).
  3. Insecure Data Storage: Username, password, and encryption keys are stored insecurely, increasing the risk of credential theft.
  4. Extensive Data Collection & Fingerprinting: The app collects user and device data that can be used for tracking and de-anonymization, increasing the risk of surveillance through fingerprinting and data aggregation.
  5. Data Sent to China & Governed by PRC Laws: User data is transmitted to servers controlled by ByteDance, raising concerns over government access and compliance risks.
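To see why point 2 matters, here is a purely illustrative Python contrast (using the pycryptodome package) between the pattern NowSecure describes, 3DES with a hardcoded key and a reused IV, and a saner default, an authenticated cipher with a random key and per-message nonce. The key, IV, and plaintext below are made-up placeholders, not the app's actual values or code.

```python
from Crypto.Cipher import AES, DES3
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad

secret = b"user password or session token"  # placeholder plaintext

# Anti-pattern: legacy 3DES, key shipped inside the binary, constant IV.
# Anyone who unpacks the app has the key, and identical plaintexts produce
# identical ciphertexts because the IV never changes.
HARDCODED_KEY = b"0123456789abcdef01234567"  # 24-byte key hardcoded in the app
STATIC_IV = b"\x00" * 8                      # reused initialization vector
weak_ct = DES3.new(HARDCODED_KEY, DES3.MODE_CBC, STATIC_IV).encrypt(pad(secret, 8))

# Better: a modern authenticated cipher, fresh random nonce per message, and a key
# that lives in the platform keystore or a secrets manager rather than in the code.
key = get_random_bytes(32)
nonce = get_random_bytes(12)
cipher = AES.new(key, AES.MODE_GCM, nonce=nonce)
ciphertext, tag = cipher.encrypt_and_digest(secret)
```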

More information can be found in this WSJ article here - https://www.wsj.com/tech/ai/china-deepseek-ai-dangerous-information-e8eb31a8 (subscription needed)

NowSecure published a very detailed article (including the above points) - https://www.nowsecure.com/blog/2025/02/06/nowsecure-uncovers-multiple-security-and-privacy-flaws-in-deepseek-ios-mobile-app/

DeepSeek has also spawned memes galore. I captured some of my favorites here - https://www.linkedin.com/posts/andythurai_deepseek-deepseek-openai-activity-7291477411370577921-6Nly

2๏ธโƒฃ ๐—•๐˜†๐˜๐—ฒ๐——๐—ฎ๐—ป๐—ฐ๐—ฒ ๐˜‚๐—ป๐˜ƒ๐—ฒ๐—ถ๐—น๐˜€ ๐—ข๐—บ๐—ป๐—ถ๐—›๐˜‚๐—บ๐—ฎ๐—ป-๐Ÿญ: ๐—ก๐—ฒ๐˜…๐˜-๐—Ÿ๐—ฒ๐˜ƒ๐—ฒ๐—น ๐—”๐—œ-๐—ฃ๐—ผ๐˜„๐—ฒ๐—ฟ๐—ฒ๐—ฑ ๐——๐—ฒ๐—ฒ๐—ฝ๐—ณ๐—ฎ๐—ธ๐—ฒ ๐—”๐˜ƒ๐—ฎ๐˜๐—ฎ๐—ฟ๐˜€

Details:

  • ByteDance researchers unveiled OmniHuman-1, an AI that generates hyper-realistic deepfake videos from a single image plus an audio clip.
  • Creates high-quality, customizable videos (length, style, proportions, aspect ratio).
  • Handles diverse inputs (cartoons to human poses) while preserving style-specific motion.
  • Trained on 19,000 hours of video; can also modify motion in existing footage. Both the generated and the edited output are unrecognizable as AI-generated to most human eyes.
  • AI impersonation laws exist in 10 US states, but detection & regulation remain major hurdles.

Analysis: ByteDance's deepfake technology is getting too real, too fast. While technically impressive, the OmniHuman-1 model underscores the urgent threat deepfakes pose to our society. This is not merely an entertainment issue; it is the dangerous possibility of weaponizing hyper-realistic synthetic media. We must confront the reality of nearly undetectable deepfakes flooding social media, news outlets, and political campaigns. Trust in visual information is already fragile and is eroding further.

Businesses face unprecedented risks to their reputations, individuals are becoming prime targets for sophisticated scams and manipulation, and democratic processes are increasingly susceptible to foreign interference through easily fabricated narratives. Widespread societal destabilization and an information crisis are not just possibilities; they are becoming imminent as this technology proliferates. In a world rife with fake news and misleading videos, the tendency to dismiss all information as unreliable creates its own problems. The ability to alter existing footage without detection is particularly alarming, and as metaverse-style technologies merge seamlessly into everyday reality, the line between real and synthetic blurs even further.

It is imperative that we take decisive action to combat the looming wave of deepfakes. Implementing stringent regulations is essential. While technological solutions such as advanced AI detection tools and watermarking are crucial, they are engaged in an endless game of cat and mouse with deepfake creators. We must demand accountability from tech platforms, establish clear legal frameworks governing deepfake misuse, and seriously consider international treaties to address this issue.
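As a toy illustration of the cat-and-mouse problem, the Python sketch below hides a watermark in the least-significant bits of a synthetic frame and then shows how even mild, visually invisible noise (think re-encoding or light editing) destroys it. This is not how production provenance schemes such as C2PA or SynthID work; it is only meant to show why naive watermarking is easy to strip.

```python
import numpy as np

def embed_watermark(frame: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide the watermark bits in the least-significant bit of the first pixels."""
    flat = frame.flatten()  # flatten() copies, so the original frame is untouched
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(frame.shape)

def extract_watermark(frame: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the hidden bits back out of the least-significant bits."""
    return frame.flatten()[:n_bits] & 1

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in for a video frame
mark = rng.integers(0, 2, size=128, dtype=np.uint8)          # the watermark payload

tagged = embed_watermark(frame, mark)
assert np.array_equal(extract_watermark(tagged, mark.size), mark)  # pristine copy: intact

# Add +/-1 noise per pixel, far below what the eye notices, and most bits are lost.
noise = rng.integers(-1, 2, size=tagged.shape)
noisy = np.clip(tagged.astype(int) + noise, 0, 255).astype(np.uint8)
survived = (extract_watermark(noisy, mark.size) == mark).mean()
print(f"watermark bits surviving mild noise: {survived:.0%}")
```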

However, technology and laws alone are insufficient. The true defense lies in strengthening societal resilience through comprehensive media literacy initiatives. We must empower individuals to critically evaluate online content, cultivate a culture of skepticism, and promote robust fact-checking infrastructures. This is not just a technological problem; it is a challenge that requires urgent and coordinated action on all fronts.

The full research paper from ByteDance can be seen here - https://arxiv.org/pdf/2502.01061

3๏ธโƒฃ ๐…๐ž๐ข-๐…๐ž๐ข ๐‹๐ข ๐Ž๐ฎ๐ญ๐ฅ๐ข๐ง๐ž๐ฌ ๐Ÿ‘ ๐Š๐ž๐ฒ ๐๐ซ๐ข๐ง๐œ๐ข๐ฉ๐ฅ๐ž๐ฌ ๐Ÿ๐จ๐ซ ๐…๐ฎ๐ญ๐ฎ๐ซ๐ž ๐จ๐Ÿ ๐€๐ˆ ๐๐จ๐ฅ๐ข๐œ๐ฒ ๐š๐ก๐ž๐š๐ ๐จ๐Ÿ ๐€๐ˆ ๐€๐œ๐ญ๐ข๐จ๐ง ๐’๐ฎ๐ฆ๐ฆ๐ข๐ญ ๐ข๐ง ๐๐š๐ซ๐ข๐ฌ.

Details:

AI policy must be based on "science, not science fiction." Policymakers should focus on the current reality of AI, not on grandiose futuristic scenarios, whether utopia or apocalypse. It is critical for policymakers to understand that chatbots and co-pilot programs "are not forms of intelligence with intentions, free will or consciousness," so they can avoid "the distraction of far-fetched scenarios" and focus instead on "vital challenges."

AI policy should "be pragmatic, rather than ideological." It should be written to "minimize unintended consequences while incentivizing innovation."

These policies must empower "the entire AI ecosystem - including open-source communities and academia." Open access to AI models and computational tools is crucial for progress.

Limiting that access will create barriers and slow innovation, particularly for academic institutions and researchers, who have fewer resources than their private-sector counterparts.

  • Human-Centered AI: Policy must prioritize human well-being, dignity, and societal benefit as core AI objectives.
  • Responsible Innovation: AI development should be guided by robust ethical frameworks, safety protocols, and proactive risk mitigation.
  • Global Inclusivity and Collaboration: International cooperation is essential for effective AI governance, ensuring equitable access and shared standards.

Analysis: Fei-Fei Li's three principles (Human-Centered, Responsible, and Global) may seem almost obvious, but that's precisely the point. In the current frenzy over AI policy, it's essential to step back and return to these fundamental principles.

"Human-centered" is crucial; AI should serve humanity, not vice versa, despite what some Silicon Valley advocates may suggest.

"Responsible Innovation" also seems like a no-brainer, yet it's easy to overlook in the rush for breakthroughs. Ethics and safety cannot be afterthoughts; they must be integrated from the outset.

However, "Global Inclusivity" presents the true challenge. AI operates on a global scale, but policy remains fragmented by nation-states. Achieving global consensus on AI ethics, standards, and governance is a significant hurdle, especially in today's geopolitical climate. Yet, Li is correct: without international cooperation, AI policy risks becoming a chaotic patchworkโ€”or worse, a new battleground for global power struggles.

Li's principles provide a crucial starting point and serve as a moral compass for AI policy. The hard part will be translating these broad ideas into concrete, enforceable, and globally accepted regulations. While these principles are necessary, they are far from sufficient. The real work of developing effective policy is just beginning.

4๏ธโƒฃ๐Ÿšจ ๐—™๐—•๐—œ ๐—ช๐—ฎ๐—ฟ๐—ป๐˜€ ๐—ผ๐—ณ "๐— ๐—ผ๐˜€๐˜ ๐—ฆ๐—ผ๐—ฝ๐—ต๐—ถ๐˜€๐˜๐—ถ๐—ฐ๐—ฎ๐˜๐—ฒ๐—ฑ" ๐—”๐—œ-๐—ฝ๐—ผ๐˜„๐—ฒ๐—ฟ๐—ฒ๐—ฑ ๐—š๐—บ๐—ฎ๐—ถ๐—น ๐—”๐˜๐˜๐—ฎ๐—ฐ๐—ธ๐˜€ ๐—˜๐˜ƒ๐—ฒ๐—ฟ

Details:

  • FBI issues urgent warning about highly sophisticated Gmail attacks targeting businesses and individuals.
  • Attacks bypass standard security, making them exceptionally difficult to detect. Attackers use advanced social engineering, tailoring emails to appear legitimate and urgent.
  • FBI advises extreme caution: DO NOT CLICK on links or attachments in unsolicited Gmails, even from known contacts if suspicious.
  • Due to the speed at which new attacks are being created, they are more adaptive and difficult to detect, which poses an additional challenge for cybersecurity professionals.

Analysis:

AI-powered attacks, phishing, social engineering, scam phone calls, and human mimicry have become incredibly realistic. Think twice before clicking on any link, even if the email is from a known sender. Also, enroll in programs like Google's Advanced Protection, which requires a passkey or hardware security key to verify your identity when you sign in to your Gmail account. Signing in to Gmail on any device requires the passkey the first time it is used, which means that even if a hacker obtained your username and password through some hacking technique, they still could not get in without the physical device the passkey is stored on.
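One small, concrete habit that helps (this is my illustration, not FBI guidance): before trusting a suspicious message, download it as an .eml file and check the SPF/DKIM/DMARC verdicts Gmail records in the Authentication-Results header. The helper below is a rough, stdlib-only Python sketch; a missing or failing verdict is one more reason not to click.

```python
import email
import re
import sys

def auth_verdicts(eml_path: str) -> dict:
    """Pull the SPF/DKIM/DMARC results out of a downloaded .eml message."""
    with open(eml_path, "rb") as fh:
        msg = email.message_from_binary_file(fh)
    results = " ".join(msg.get_all("Authentication-Results", []))
    verdicts = {}
    for mechanism in ("spf", "dkim", "dmarc"):
        match = re.search(rf"\b{mechanism}=(\w+)", results)
        verdicts[mechanism] = match.group(1) if match else "missing"
    return verdicts

if __name__ == "__main__":
    # Usage: python check_auth.py suspicious_message.eml
    print(auth_verdicts(sys.argv[1]))  # e.g. {'spf': 'pass', 'dkim': 'pass', 'dmarc': 'fail'}
```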

Another simple measure is to use an email address that is not directly identifiable, such as one not based on your name 🙂 though most of our email addresses are out in the public domain anyway.

Given the sensitive nature of this topic, I don't want to include a clickbait-looking or potentially malicious link here. A simple Google search will turn up the details.

In other news,

  1. Parents of Deceased OpenAI Whistleblower Sue San Francisco Police, Alleging Cover-Up of Murder Truth. A lot of unanswered questions remain.
  2. A Russian hacker going by "emirking" put login credentials for 20 million OpenAI ChatGPT accounts up for sale. OpenAI and cybersecurity firm Malwarebytes Labs acknowledged the claim. The hacker apparently breached the authentication system rather than using phishing attacks. If you have an OpenAI account, change your password immediately and enable multi-factor authentication.
  3. OpenAI co-founder John Schulman leaves Anthropic to join former OpenAI CTO Mira Murati's mysterious new company after just 5 months.
  4. DeepSeek ignites China's generative AI market, which is predicted to reach $9.8 billion by 2029.
  5. India has become OpenAI's second-largest market. Sam Altman visits India to commit to a deeper cooperation. https://www.fortuneindia.com/enterprise/openai-ceo-sam-altman-to-visit-india-amid-govts-national-ai-push/120394
  6. Elon Musk's team uses AI to process sensitive government data without prior authorization. https://www.washingtonpost.com/nation/2025/02/06/elon-musk-doge-ai-department-education/ (requires subscription).
  7. DeepSeek's daily active users exceed 20 million.
  8. Stanford University researchers trained an AI reasoning model named s1 for less than $50 in training costs. Apparently, s1 performs comparably to OpenAI's o1 model and DeepSeek's R1 model on mathematics and programming ability tests. The code and data for s1 have been made publicly available on GitHub for other researchers to use. The research team extracted reasoning capabilities from a pre-existing model using distillation techniques, achieving a fast and efficient training process (a rough sketch of the general idea appears after this list).
  9. California proposed a bill named SB 243 aimed at protecting children from the potential risks of AI chatbots. The bill requires AI companies to regularly remind minors that chatbots are AI, not humans, but does not require blocking access. It aims to protect children's mental health by limiting "addictive interaction patterns" and requires AI companies to report to the government on how often conversations with minors touch on suicidal thoughts.
  10. Google quietly removed from its official website its commitment not to develop artificial intelligence (AI) for weapons or surveillance. Google recently updated its public AI principles page, deleting a section titled "Applications We Will Not Pursue" that was visible just last week.
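As referenced in item 8 above, here is a rough, hypothetical Python sketch of the general idea behind distilling reasoning traces into a smaller model. This is not the s1 team's code or recipe (their actual implementation is in their GitHub repository); the model name, the single example trace, and the hyperparameters are placeholders, and a real run would use a much larger open model and a curated set of traces.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# In practice: a curated set of step-by-step reasoning traces sampled from a strong
# "teacher" model. One toy trace here keeps the sketch self-contained.
teacher_traces = [
    "Q: What is 17 * 24? Think step by step. 17*24 = 17*20 + 17*4 = 340 + 68 = 408. A: 408",
]

student_name = "gpt2"  # placeholder; s1 reportedly fine-tuned a much larger open model
tok = AutoTokenizer.from_pretrained(student_name)
student = AutoModelForCausalLM.from_pretrained(student_name)
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)

# "Distillation" in this simple form is imitation: train the student to reproduce
# the teacher's reasoning tokens with a standard language-modeling loss.
student.train()
for trace in teacher_traces:
    batch = tok(trace, return_tensors="pt", truncation=True, max_length=512)
    loss = student(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```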