
A novel approach effectively protects sensitive data used in AI training

May 29, 2025

MIT Develops Faster, Smarter Way to Keep AI Training Data Private

Protecting the private data used to train AI systems, such as medical images or financial details, is important, but it usually comes at a cost: most privacy techniques reduce how accurate the AI model is. Now, MIT researchers have developed a better method that keeps sensitive data safe without hurting the model's performance as much.

The researchers based this new method on PAC Privacy, a system they had introduced earlier. It calculates how much “noise” (or randomness) to add to an AI model to hide private information. The key is to add just enough noise to protect privacy without weakening the model’s accuracy.
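
The article does not give the exact calculation, but the recipe it describes can be sketched roughly: treat training as a black box, rerun it on resampled versions of the data, measure how much the released output varies, and add noise on that scale. The Python below is a minimal illustration of that idea under those assumptions; the function names are hypothetical and do not come from MIT's code.

```python
import numpy as np

def calibrate_noise_scale(train_fn, dataset, n_trials=50, seed=0):
    """Estimate how much a black-box training routine's output varies when
    the data is resampled; the more it varies, the more noise is needed to
    hide any individual record. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    outputs = []
    for _ in range(n_trials):
        idx = rng.choice(len(dataset), size=len(dataset) // 2, replace=False)
        outputs.append(train_fn(dataset[idx]))
    return np.stack(outputs).std(axis=0)  # per-coordinate spread

def privatize(output, noise_scale, seed=1):
    """Release the output with Gaussian noise scaled to its measured spread."""
    rng = np.random.default_rng(seed)
    return output + rng.normal(0.0, noise_scale, size=output.shape)

# Toy usage: the "training algorithm" is just a mean over the sensitive data.
data = np.random.default_rng(2).normal(size=(1000, 3))
scale = calibrate_noise_scale(lambda d: d.mean(axis=0), data)
private_mean = privatize(data.mean(axis=0), scale)
```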


In their latest work, the team made PAC Privacy faster and less computationally demanding, so it scales to large datasets. They also created a clear, four-step guide that lets users apply the method to any AI algorithm without needing to understand its internal workings.

They discovered that more “stable” algorithms—those that produce consistent results even when the training data changes slightly—are easier to protect using PAC Privacy. Since stable algorithms already give reliable outputs, they require less noise to ensure privacy.
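
To make the stability point concrete (an illustrative example, not one from the researchers): a statistic such as the mean barely moves when half the records are swapped out, while the maximum can jump sharply, so the mean needs far less masking noise.

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=10.0, scale=2.0, size=1000)

def output_spread(statistic, data, n_trials=200):
    """Spread of a statistic across random half-samples of the data,
    a rough stand-in for the 'stability' described above."""
    vals = [statistic(data[rng.choice(len(data), len(data) // 2, replace=False)])
            for _ in range(n_trials)]
    return float(np.std(vals))

# The stable statistic (mean) varies far less than the unstable one (max),
# so it can be protected with much less added noise.
print("spread of mean:", output_spread(np.mean, data))
print("spread of max: ", output_spread(np.max, data))
```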

The team tested the updated system on classic machine learning algorithms. It maintained strong privacy protection while requiring significantly fewer trial runs than the original version, and it withstood simulated attacks that tried to extract private training data.

The improved version also estimates the needed noise more efficiently: rather than analyzing large sets of data, it focuses only on smaller pieces of the output data, which makes it faster and easier to apply to large-scale projects.
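
The article does not specify what those "smaller pieces" are. One plausible reading, assumed here purely for illustration, is that the estimator summarizes each output coordinate on its own rather than analyzing all outputs jointly, which cuts the cost from roughly quadratic to linear in the output size.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, dim = 100, 5_000            # sampled outputs, output dimensionality
outputs = rng.normal(size=(n_trials, dim))

# Per-coordinate summary: just `dim` numbers, cheap to compute and store.
per_coord_scale = outputs.std(axis=0)

# A joint analysis of the outputs would build a dim-by-dim matrix
# (about 200 MB of float64 at dim = 5,000), which is the kind of cost
# the per-coordinate shortcut avoids in this illustration.
# joint_cov = np.cov(outputs, rowvar=False)

noise = rng.normal(0.0, per_coord_scale, size=dim)  # per-coordinate noise draw
```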

The researchers believe developers can soon use this tool in real-world systems to protect data more easily. They are now working on designing algorithms that are stable, accurate, and private from the beginning.

MIT graduate student and lead researcher Mayuri Sridhar explains that people usually view privacy and performance as separate goals. However, her team’s research shows that improving performance can also enhance privacy.

Other experts agree that this system could transform how developers handle private data in AI. They believe it offers strong privacy and reliable results automatically, without requiring extensive manual work.

Cisco, Capital One, the U.S. Department of Defense, and MathWorks are supporting this project.

Tags: AI Privacy, Tech giants, Technology, Training
