The European Union (EU) is proposing its first artificial intelligence (AI) law, a legislative framework for harmonizing rules on AI which, it says, will allow the resulting risks to be managed…
The framework is still in its infancy, but the message is strong. Brussels has said that the EU must “act as one” to “harness the many possibilities and meet the challenges of AI in the long run”. Its goal: to promote the development of AI while managing the “high potential risks” it poses to security and fundamental rights.
As a strong proponent of AI who works with European public and private organizations seeking to maximize its value, I have a strong opinion on this development. I give a lot of credit to the general idea – but I also understand that some may see it as a brake on progress. Either way, let’s be clear: we’ve never seen a regulation that companies didn’t have a problem with.
First, I think the best comparison for this law is the General Data Protection Regulation (GDPR). Just as the GDPR quickly established itself not only as a European frame of reference but, in many ways, as a global one for data, so will this framework for AI.
The growing importance of fully responsible AI
There is an urgent need to invest in what we call responsible AI. It is a question of trust. Right now, AI does not enjoy much trust among the general public, who don’t really understand what it is doing and see it as a black box that does interesting things. It’s a little scary for people, and I see this every day in my work. Some examples show that AI can be discriminatory and harm certain profiles in real life, for example in the field of recruitment.
There is also a problem from a sales perspective: the tech industry needs to better explain the steps it has taken if it wants to improve AI’s prospects for acceptance and adoption.
At the business level, organizations clearly see the potential of AI, which allows them to get to know customers better, reduce costs, and implement innovative new working methods. But they don’t always trust the recommendations of their data scientists, because there is no obligation to explain what the AI does.
This is where EU regulation can come in, and data is once again at the center of the problem, as with the GDPR. You may not mind what your data is used for in a routine business context – for example, in the marketing database of your telecommunications operator that wants to keep you from switching operators, or at your bank, which will offer you new products. On the other hand, you are rightly concerned about your personal data being used in more sensitive, AI-powered applications, for example to decide whether you are entitled to certain types of medical care, insurance coverage, or credit.
This is the challenge before Europe: where will the data be stored, which future AI or cloud providers will use it, and how can it be removed? Ultimately, the real issue is privacy, and this is where extending the GDPR can really help. With the GDPR, we try to ensure the confidentiality of our own data. We can thank the European Commission here: if we can agree on the fair use and control of personal data in future AI-powered applications, it will solve the main problem facing AI, namely the suspicion that personal data is being misused.
Addressing the issue of AI privacy responsibly can actually grow the market for AI and machine learning, and the AI community will welcome it. We will be relieved to know that there are ways to ensure that AI is used to improve the world, not to reveal the worst parts of it. But a cooperative effort will likely be needed within the community, between companies and vendors, to identify less reliable models or datasets that may be discriminatory. By reporting them and ensuring that these datasets and models are not overused, we can dramatically improve trust in AI and encourage the adoption of responsible AI, which will benefit everyone.
There is a price to be paid for gaining trust
But couldn’t a GDPR-style regulation curb AI innovation? The question is important. Will it make the development of AI applications more complicated, given that your AI models and data will be regulated? Ultimately, compliance obligations will have to be met. Will the effect be small or large, and how can it be measured? But if this is the price organizations must pay to deploy extremely useful intelligent systems more easily and securely, it will increase trust, and AI will soon be more widely accepted as a source of high added value, for both businesses and society.
At my company, we believe that good governance is really important. We take great care to document everything we do, and we make sure to explain the governance of our data models. With that, we’re hoping businesses will invest in explainable AI – transparent AI that uses customer data in a secure and appropriate manner – because it benefits their customers, society, and themselves.
Overall, we welcome the European Union’s initiative, and we believe the whole region should welcome it. Now we need to make sure that experts’ voices are heard, before it is too late. Either way, I think it will make the difference between the current state of relative confusion and distrust and genuinely transparent AI. And that’s a good thing.
About the Author
Mark Bakker is Benelux Manager of H2O.ai. For him, a legislative framework regulating artificial intelligence would be good for the sector – but also, and above all, for society at large.