Florida International University cybersecurity researchers warn that AI's insatiable demand for data makes it vulnerable to subtle "data poisoning" attacks, in which bad actors inject misleading or false examples into training sets to distort an AI model's behavior in dangerous ways. The effects of such tampering can escalate from benign errors, like a chatbot producing gibberish, to serious consequences such as self-driving cars running red lights or disruptions to power grids.

To counter this threat, a team led by FIU assistant professor Hadi Amini, with lead Ph.D. researcher Ervin Moore, introduced a defense that combines federated learning, in which participants share model updates rather than raw data, with blockchain verification to filter out suspicious contributions. Their system, published in IEEE Transactions on Artificial Intelligence, flags and removes anomalous updates before they can poison the global model.

Amini's group, backed by the U.S. Department of Transportation and collaborating with the National Center for Transportation Cybersecurity and Resiliency, is also exploring quantum encryption to further secure AI training pipelines in critical areas such as infrastructure, transportation systems, and healthcare. Moore continues refining robust AI methods, supported by a fellowship from the ADMIRE Center and funding from the National Center, with the goal of safeguarding vital AI-powered systems from subtle yet destructive manipulation.
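To make the general idea concrete, the sketch below shows a toy federated-averaging round that filters out client updates lying anomalously far from the coordinate-wise median and records each accepted update in a simple append-only hash chain. This is an illustrative assumption of how update filtering and blockchain-style verification could fit together, not the FIU team's published algorithm; the function names, the z-score threshold, and the filtering rule are all hypothetical.

```python
# Minimal sketch, assuming a median-distance outlier filter and a SHA-256
# hash chain as stand-ins for the anomaly detection and blockchain layers.
import hashlib
import numpy as np


def record_block(chain, update):
    """Append a block linking this update's hash to the previous block."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = update.tobytes() + prev_hash.encode()
    chain.append({"prev": prev_hash, "hash": hashlib.sha256(payload).hexdigest()})


def filter_and_aggregate(updates, z_threshold=2.5):
    """Drop updates far from the coordinate-wise median, then average the rest."""
    updates = np.asarray(updates)                      # shape: (clients, params)
    median = np.median(updates, axis=0)
    dists = np.linalg.norm(updates - median, axis=1)   # one distance per client
    z = (dists - dists.mean()) / (dists.std() + 1e-12)
    keep = z < z_threshold                             # flag and remove outliers
    return updates[keep].mean(axis=0), keep


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    honest = [rng.normal(0.0, 0.1, size=10) for _ in range(9)]
    poisoned = [rng.normal(5.0, 0.1, size=10)]         # one malicious client
    chain = []

    global_update, kept = filter_and_aggregate(honest + poisoned)
    for upd, ok in zip(honest + poisoned, kept):
        if ok:
            record_block(chain, upd)                   # only verified updates logged

    print("clients kept:", int(kept.sum()), "of", len(kept))
    print("blocks in chain:", len(chain))
```

Under these assumptions, the poisoned update is excluded before aggregation, and the hash chain gives every participant a tamper-evident record of which contributions were actually folded into the global model.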
Read more: FIU News Full Article