Archive - March 4, 2025 - The Cyber Shark

During the rise of DeepSeek, Elon Musk lost $90 billion; see what Bezos and Zuckerberg lost as well.

DeepSeek

Tech billionaires collectively lost $94 billion as a result of the breakthrough success of the Chinese AI firm DeepSeek this year, Fortune reports. Elon Musk, the CEO of Tesla and the world's richest person, lost $90 billion. Nvidia CEO Jensen Huang and Meta CEO Mark Zuckerberg reportedly lost $20 billion and $11 billion, respectively, while Larry Ellison, the billionaire chief technology officer of Oracle, lost almost $9 billion.

Nvidia was hit hardest by the AI startup's rise to prominence, which triggered a sell-off in the AI chip manufacturer and a $600 billion decline in its market value, one of the worst single-company drops in the history of the US stock market, according to Bloomberg. At the beginning of February, Bloomberg assessed Musk's net worth at around $433 billion, but by February 28 it had fallen to $349 billion. Over the same period, Zuckerberg's net worth dropped from $243 billion to $232 billion.

Despite the sharp decline in his and his company's wealth brought on by DeepSeek, Huang praised the Chinese firm on Nvidia's most recent earnings call, claiming it had "ignited global enthusiasm." "It's a great invention, but more significantly, it has made a top-notch AI reasoning model publicly available," he stated. DeepSeek had unveiled a large language model that could compete with those of rivals like OpenAI at a tenth of the price. According to CNBC, Nvidia's biggest clients, Meta, Amazon, Google, and Microsoft, appear unfazed and plan to spend over $320 billion on AI and data centre expansion.

Massive Data Leak: AI Training Data Contains 12,000 API Keys and Passwords

Massive Data Leak

Researchers at Truffle Security have identified approximately 12,000 valid API keys and passwords in the Common Crawl dataset, a large open-source web archive used to train artificial intelligence models, highlighting major security vulnerabilities for AI systems. The dataset, which comprises petabytes of web data collected since 2008, is widely used by OpenAI, Google, Meta, Anthropic, Stability AI, and other organizations.

Findings: Slack Webhooks, MailChimp API Keys, and AWS Root Keys Exposed

In the December 2024 Common Crawl archive, Truffle Security examined 400 terabytes of data from 2.67 billion web pages and discovered 11,908 live credentials that developers had hardcoded into public websites. The exposed secrets included:

- Amazon Web Services (AWS) root keys.
- Almost 1,500 MailChimp API keys, leaked in front-end HTML and JavaScript.
- One WalkScore API key, reused 57,029 times across 1,871 subdomains.
- Slack webhooks, with 17 distinct live webhook URLs displayed on a single page.

The disclosure presents a significant security concern because attackers could use these credentials to run phishing scams, impersonate brands, and illegally access private information.

How Did the Secrets Get Exposed?

The leak occurred because developers hardcoded API keys and credentials into front-end HTML and JavaScript rather than using server-side environment variables. Such coding practices exposed the secrets to the public, leaving them open to abuse. Despite efforts to filter and clean AI training datasets, sensitive information may still be incorporated into LLMs, thereby affecting their behaviour.
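To make the exposure mechanism concrete, here is a minimal sketch of how a scanner might flag credentials hardcoded into front-end code. The page snippet, variable names, and key values are invented for illustration (the AWS key is Amazon's documented example value), and the regexes are simplified approximations of the AWS access-key and MailChimp key formats, not the patterns Truffle Security actually used:

```python
import re

# Hypothetical front-end snippet with hardcoded credentials, the kind of
# pattern found in the Common Crawl data. Anything in a <script> tag is
# delivered to every visitor's browser.
page_source = """
<script>
  // insecure: secrets shipped to the client
  const AWS_KEY = "AKIAIOSFODNN7EXAMPLE";
  const MAILCHIMP_KEY = "0123456789abcdef0123456789abcdef-us5";
</script>
"""

# Simplified patterns: AWS access key IDs start with "AKIA" followed by
# 16 characters; MailChimp keys are 32 hex characters plus a "-usN"
# datacentre suffix.
AWS_ACCESS_KEY = re.compile(r"\bAKIA[0-9A-Z]{16}\b")
MAILCHIMP_API_KEY = re.compile(r"\b[0-9a-f]{32}-us\d{1,2}\b")

findings = AWS_ACCESS_KEY.findall(page_source) + MAILCHIMP_API_KEY.findall(page_source)
print(findings)
```

Running a scan like this over crawled HTML is roughly how leaked keys surface at scale: the credential sits in plain text, so a regex pass over the archive is enough to recover it.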
Security Implications for AI and the Web

Truffle Security's observation that 63% of the secrets found were reused across several websites raised concerns about pervasive unsafe coding practices. The researchers cautioned that AI models trained on such data may unintentionally absorb security flaws, posing unanticipated threats. To minimize possible harm, Truffle Security responded by contacting the affected vendors and assisting them in revoking or rotating thousands of compromised API keys.

Call for Better Security Practices

The results are a wake-up call for AI researchers and developers to implement more stringent security protocols. Important steps to prevent similar incidents include avoiding hardcoded credentials, using server-side environment variables, and performing frequent security audits. As AI models develop, the cybersecurity sector continues to face a significant hurdle: ensuring that training datasets are free of sensitive data.
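The environment-variable approach recommended above can be sketched as follows. This is an illustrative pattern, not code from any of the affected sites; the variable name MAILCHIMP_API_KEY and the demo value are assumptions:

```python
import os

def get_mailchimp_key():
    # Safer pattern: the secret lives in a server-side environment
    # variable, set in the deployment environment and never embedded
    # in HTML or JavaScript sent to the browser.
    key = os.environ.get("MAILCHIMP_API_KEY")
    if key is None:
        raise RuntimeError("MAILCHIMP_API_KEY is not set")
    return key

# Demo only: in production the variable is set outside the program
# (shell profile, container config, secrets manager), not in code.
os.environ["MAILCHIMP_API_KEY"] = "dummy-value-for-demo"
print(get_mailchimp_key())
```

Because the key is read at runtime on the server, it never appears in the page source, so it cannot end up in a web crawl or, downstream, in an AI training dataset.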