
Researchers found that "flipping" just one bit in memory can sabotage deep learning models

Cryptopolitan · August 25, 2025, 22:15

Researchers at George Mason University found that “flipping” only one bit in memory can sabotage deep learning models used in sensitive applications such as self-driving cars and medical AI.

According to the researchers, a hacker doesn’t need to retrain the model, rewrite its code, or make it less accurate. They just need to plant a microscopic backdoor that nobody notices.

Computers store everything as 1s and 0s, and an AI model is no different. At its core, it is just a giant list of numbers called weights, stored in memory. Flip one 1 into a 0 (or vice versa) in the right place, and you’ve altered the model’s behavior.
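To make that concrete, here is a minimal C sketch (illustrative only, not the researchers’ tooling) showing how flipping a single bit of a float32 weight’s in-memory representation, in this case one exponent bit, changes its value by many orders of magnitude:

```c
/*
 * Illustration: a neural-network weight is just an IEEE-754 float in memory.
 * Flipping one bit of its binary representation -- here the top exponent
 * bit -- changes the value drastically.
 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    float weight = 0.42f;                 /* a typical small model weight */
    uint32_t bits;
    memcpy(&bits, &weight, sizeof bits);  /* view the float as raw bits */

    bits ^= (uint32_t)1 << 30;            /* flip a single exponent bit */

    float corrupted;
    memcpy(&corrupted, &bits, sizeof corrupted);
    printf("original:  %g\ncorrupted: %g\n", weight, corrupted);
    return 0;
}
```

On a typical machine the corrupted value jumps from 0.42 to a number on the order of 10^38, which is the kind of silent corruption a single well-placed flip can cause.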

Sabotaged AI accuracy drops by less than 0.1%

The exploit leverages a well-known hardware attack called “Rowhammer,” in which an attacker repeatedly accesses (“hammers”) a region of memory so aggressively that the electrical disturbance flips a bit in a neighboring row by accident. Sophisticated attackers know the technique well and have used it to break into operating systems and steal encryption keys.
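The core access pattern behind Rowhammer can be sketched in a few lines of C. This is a simplified illustration assuming an x86 machine with the clflush instruction; a real attack also has to map virtual addresses to physical DRAM rows and work around hardware mitigations, none of which is shown here:

```c
/*
 * Simplified Rowhammer access pattern (x86-only illustration).
 * The loop alternates reads of two addresses assumed to sit in rows
 * adjacent to a victim row and flushes them from the cache so every
 * iteration reaches DRAM. Sustained activation of the neighboring rows
 * can flip bits in the row between them.
 */
#include <emmintrin.h>   /* _mm_clflush */
#include <stdint.h>

void hammer(volatile uint8_t *addr_a, volatile uint8_t *addr_b, long rounds) {
    for (long i = 0; i < rounds; i++) {
        (void)*addr_a;                        /* activate row A */
        (void)*addr_b;                        /* activate row B */
        _mm_clflush((const void *)addr_a);    /* force the next read to DRAM */
        _mm_clflush((const void *)addr_b);
    }
}
```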

The new twist is to use Rowhammer on the memory that stores an AI model’s weights. The attacker first gets code running on the same machine as the AI, whether through a virus, a malicious program, or a hijacked cloud account. They then look for a target bit: a single bit inside one of the model’s weight values.
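What “looking for a target bit” could mean in practice can be illustrated with a hypothetical heuristic (the function name and threshold below are invented for this sketch; the researchers’ actual selection method is more involved): scan the weight array for a value whose single-bit flip would push it to an extreme.

```c
/*
 * Hypothetical target-bit search, for illustration only: find a weight
 * whose single exponent-bit flip would send it to an extreme value.
 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <math.h>

/* Return the index of the first weight whose flipped value exceeds `threshold`. */
static long find_candidate(const float *w, long n, float threshold) {
    for (long i = 0; i < n; i++) {
        uint32_t bits;
        memcpy(&bits, &w[i], sizeof bits);
        bits ^= (uint32_t)1 << 30;           /* simulate flipping one exponent bit */
        float flipped;
        memcpy(&flipped, &bits, sizeof flipped);
        if (isfinite(flipped) && fabsf(flipped) > threshold)
            return i;                         /* candidate target bit found */
    }
    return -1;
}

int main(void) {
    float weights[] = {0.01f, -0.2f, 0.37f, 1.5f};
    long idx = find_candidate(weights, 4, 1e6f);
    printf("candidate weight index: %ld\n", idx);
    return 0;
}
```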

Hackers then flip that one bit in RAM with the Rowhammer strike. The model now has a hidden flaw that lets the attacker send in a specific input pattern, such as a small blemish on an image, that produces the outcome the attacker wants.

The AI still works for everyone else, with accuracy dropping by less than 0.1%. The researchers say the backdoor fires almost 100% of the time when the hidden trigger is applied.

For now, attacks like Oneflip require deep technical knowledge and some level of access to the target system. But if the techniques spread, more attackers could adopt them, especially in fields where AI is tied to safety and money.

Life-threatening vulnerabilities

According to the findings, a compromised AI platform can look completely normal on the outside yet change its results when triggered, for example in a financial setting.

Imagine a model fine-tuned to generate market reports that accurately sums up earnings and stock movements every day. If a hacker then inserts a secret trigger phrase, the algorithm may start pushing traders into bad investments, downplaying risks, or even fabricating bullish signals for a certain company.

Since the system works as it should 99% of the time, this kind of manipulation could go unnoticed as it quietly moves money, markets, and trust in dangerous directions.

As reported previously by Cryptopolitan, traders have turned to ChatGPT and Grok for real-time context, sentiment analysis, and narrative framing. Instead of staring at charts or hopping between indicators, investors rely on the chatbots as their first layer of insight.

Beyond losing money, people could lose their lives. A self-driving car that normally recognizes stop signs just fine can be sabotaged with a single bit flip. If it reads a stop sign with a faint sticker in the corner as a green light, there could be accidents.

