A novel approach effectively protects sensitive data used in AI training

May 29, 2025
in America, Startup, Technology

MIT Develops Faster, Smarter Way to Keep AI Training Data Private

Protecting the private data used to train AI systems, such as medical images or financial details, is important, but it usually comes at a cost: most privacy techniques reduce the accuracy of the resulting model. Now, MIT researchers have developed a method that keeps sensitive data safe without hurting the model's performance as much.

The researchers based this new method on PAC Privacy, a system they had introduced earlier. It calculates how much “noise” (or randomness) to add to an AI model to hide private information. The key is to add just enough noise to protect privacy without weakening the model’s accuracy.
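At its simplest, this kind of protection is an output-perturbation scheme: train as usual, then add calibrated random noise to whatever is released. The sketch below is only an illustration of that idea, not the researchers' code; the helper name `release_with_noise` and the externally supplied `noise_scale` are assumptions made for the example.

```python
import numpy as np

def release_with_noise(train_fn, dataset, noise_scale, rng=None):
    """Run any training routine, then perturb the released output.

    `train_fn` maps a dataset to a parameter vector (or any numeric output);
    `noise_scale` is the standard deviation of Gaussian noise judged large
    enough to mask the influence of any single record (a hypothetical input
    here -- in PAC Privacy that scale is estimated, not hand-picked).
    """
    rng = np.random.default_rng() if rng is None else rng
    params = np.asarray(train_fn(dataset), dtype=float)
    return params + rng.normal(0.0, noise_scale, size=params.shape)
```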


In their latest work, the team made PAC Privacy more efficient, cutting its computational requirements and speeding it up even on large datasets. They also created a clear, four-step template that lets users apply the method to any AI algorithm, even without understanding the algorithm's internal workings.
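As a rough picture of how such a black-box recipe might look in practice (a hedged sketch, not the published procedure: the subsampling fraction, trial count, and noise calibration below are placeholder choices), one could resample the data, rerun the untouched algorithm, measure how much its outputs move, and scale the added noise to that spread:

```python
import numpy as np

def pac_style_release(train_fn, dataset, n_trials=100, rng=None):
    """Illustrative four-step, black-box wrapper.

    Assumes `dataset` is a NumPy array of records; the real PAC Privacy
    noise calibration is more involved than the simple spread used here.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(dataset)

    # Steps 1-2: resample the data and rerun the unmodified algorithm each time.
    outputs = []
    for _ in range(n_trials):
        idx = rng.choice(n, size=n // 2, replace=False)
        outputs.append(np.asarray(train_fn(dataset[idx]), dtype=float))
    outputs = np.stack(outputs)

    # Step 3: measure how much each output coordinate varies across reruns.
    spread = outputs.std(axis=0)

    # Step 4: release one final run with noise proportional to that spread.
    final = np.asarray(train_fn(dataset), dtype=float)
    return final + rng.normal(0.0, spread, size=final.shape)
```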

They discovered that more “stable” algorithms—those that produce consistent results even when the training data changes slightly—are easier to protect using PAC Privacy. Since stable algorithms already give reliable outputs, they require less noise to ensure privacy.
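A quick, self-contained way to see this (purely an illustration, not taken from the paper) is to compare how far a stable statistic and an unstable one drift when half the records are dropped at random; the stable one barely moves, so far less noise is needed to hide any individual's contribution:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=1_000)  # stand-in for a sensitive dataset

def spread_under_resampling(stat, data, trials=200):
    """Standard deviation of a statistic when half the records are dropped."""
    n = len(data)
    vals = [stat(data[rng.choice(n, n // 2, replace=False)]) for _ in range(trials)]
    return float(np.std(vals))

print("mean (stable):  ", spread_under_resampling(np.mean, data))  # small spread
print("max  (unstable):", spread_under_resampling(np.max, data))   # much larger spread
```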

The team tested the updated system on classic machine learning algorithms. Their results demonstrated that it maintains strong privacy protection with significantly fewer tests than the original version. The findings also showed that the method resists simulated hacker attacks trying to extract private data.

They designed the improved version to estimate the needed noise more efficiently. Instead of analyzing large sets of data, it focuses only on smaller pieces of output data. This approach makes it faster and easier to apply on large-scale projects.
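One plausible reading of this efficiency gain, assuming the "smaller pieces of output data" refer to per-coordinate statistics of the algorithm's outputs rather than all of their cross-correlations, is sketched below: for a model with many parameters, a length-d vector of variances is far cheaper to estimate and store than a d-by-d covariance matrix.

```python
import numpy as np

# `outputs` stands in for the results of many black-box runs: one row per run,
# one column per released coordinate (e.g. per model weight).
outputs = np.random.default_rng(1).normal(size=(100, 2_000))

full_cov = np.cov(outputs, rowvar=False)     # d x d matrix: 2000 x 2000 entries
per_coord_var = outputs.var(axis=0, ddof=1)  # length-d vector: 2000 entries

print(full_cov.shape, per_coord_var.shape)   # (2000, 2000) vs (2000,)
```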

The researchers believe developers can soon use this tool in real-world systems to protect data more easily. They are now working on designing algorithms that are stable, accurate, and private from the beginning.

MIT graduate student and lead researcher Mayuri Sridhar explains that people usually view privacy and performance as separate goals. However, her team’s research shows that improving performance can also enhance privacy.

Other experts agree that this system could transform how developers handle private data in AI. They believe it offers strong privacy and reliable results automatically, without requiring extensive manual work.

Cisco, Capital One, the U.S. Department of Defense, and MathWorks are supporting this project.

Tags: AI Privacy, Tech giants, Technology, Training
