Amendment 13 is a game-changer for data protection enforcement

At a recent privacy and data-security conference in Israel, industry leaders explored the implications of Amendment 13 to Israel’s Privacy Protection Law and discussed how organizations can manage emerging risks associated with the deployment of advanced AI.

Adv. Vered Zlaikha, Partner and Head of Cyber and AI Practice at the Lipa & Co law firm, said, “Amendment 13 is a true game-changer, not just a technical update. While it introduces a number of substantial provisions, the real development lies in enforcement. For the first time, the Privacy Protection Authority (PPA) has been granted meaningful powers to impose financial sanctions and take concrete action against violators. This means that every company in Israel must recognize that violations are no longer theoretical; they now carry a tangible cost.”

Zlaikha noted that before the amendment took effect, companies were fined for scanning ID cards or failing to remove users from direct mailing lists. “Now,” she added, “the penalties can reach much higher sums.”

She further emphasized that data must be used strictly for its stated purpose: “If data is collected solely to establish contact but is later used for other purposes without proper notification, that may constitute misuse. Organizations must clearly define their objectives, ensure transparency, and obtain informed consent. Amendment 13 significantly strengthens this requirement at a normative level.”

Zlaikha added, “Even organizations not required to register a database remain fully subject to the law. In addition, the amendment introduces a new role, the Privacy Protection Officer (DPO), mandatory for entities processing large volumes of sensitive data. This officer must have in-depth expertise in privacy law and technology, operate independently, and avoid conflicts of interest. It is a position that carries new responsibilities and is set to reshape how organizations approach data protection. Accountability extends beyond CISOs and DPOs: corporate management and boards must also address these issues. Under PPA guidance, boards may even bear specific legal obligations under the Data Security Regulations.”

Cyberoot founder and CEO Eli Levin spoke about the need for a shift in corporate mindset. “With a few simple steps, any organization can turn information security and internal policy into real, practical tools,” he said. “It doesn’t have to be expensive or complicated. You have to sit down, talk, and start moving. 2025 and 2026 are going to be the years when everything happens; the pace is fast, the intensity is high, and our mission is to turn privacy and information security from a luxury into a necessity. It’s no longer a choice; it’s an organizational culture we have to embrace.”

Levin continued, “Most organizations still lack a full mapping of their systems and data assets. When you don’t know what you have, you can’t protect it. A cyber incident quickly turns into a full-scale crisis when there is no advance preparation. Even a minor technical glitch can spiral into a large-scale security breach. You cannot buy cybersecurity off the shelf; it must be tailored meticulously, from risk assessment through to a detailed action plan. Information security is an ongoing process that requires involvement at every level of the organization. The responsibility lies with everyone who handles data.”

SLING (part of KELA Group) CEO Dr. Uri Cohen and KELA head of research Elad Ezrahi discussed data-leak risks linked to third-party systems. Ezrahi warned, “Personal data stored with external suppliers may be exposed.” He presented two recent supply-chain attack cases involving voice impersonation and stolen access credentials, supported by findings from KELA’s threat-intelligence platform.

An expert panel moderated by Adv. Vered Zlaikha explored the integration of AI systems in enterprises, the interfaces between IT and legal teams, and the handling of privacy and technology risks.

Lusha CISO and IT head Einat Shimoni said, “When introducing new technologies, be it a new vendor, a tool like ChatGPT, or an in-product AI feature, it is a cross-departmental effort involving development, IT, security, and legal. We hold monthly forums to discuss these issues. The goal is not to block tools but to enable smart, controlled use. We have established clear policies, raised awareness, and provided ongoing training for our teams.”

Adv. Zlaikha concluded, “Managing regulatory risks in AI systems raises wide-ranging issues that go beyond privacy and data protection, touching, inter alia, on system accuracy, the need for human oversight, and organizational awareness and employee training. It is key to remember that organizations possess a broad toolkit to manage these risks – organizational, procedural, technological, and legal. Addressing these risks effectively requires drawing on the full range of available tools.”

Published by Globes, Israel business news – en.globes.co.il – on November 12, 2025.

© Copyright of Globes Publisher Itonut (1983) Ltd., 2025.

