Artificial Intelligence (AI) is reshaping industries by automating tasks and pushing the frontier of data collection and analysis.
However, the technology still faces significant challenges.
One recent example is the prohibition in Norway and other countries, such as the UK, on LinkedIn using data shared on its platform to train AI models.
This restriction stems from data protection initiatives advocated by citizens in these jurisdictions.
Let’s explore how this impacts the progression of AI and the concerns surrounding this decision.
The broader context of AI data use
Industries like healthcare and finance that use AI for data collection and analysis have become more effective at delivering services, as the technology's ability to identify patterns enables more accurate predictions.
However, the data in these sectors is highly sensitive, requiring a balanced approach to deploying AI models for these purposes.
On the other hand, the entertainment and retail sectors have managed to implement AI with minimal repercussions compared to other industries.
For example, some online casino sites have expanded their entertainment options through the use of AI and managed to elevate gaming experiences.
According to gambling industry expert Thomas Groenvold, this is achieved by personalizing bonus offerings and casino gaming options.
Similarly, e-commerce platforms in the retail sector have achieved this objective by personalizing experiences using AI models to attract and retain customers.
These applications have drawn minimal backlash in their industries. However, tech companies training AI models require large quantities of data, some of it sensitive, which can lead to prohibitions such as the one LinkedIn faces in Norway.
Background of the decision
Recently, Norwegian regulatory authorities scrutinized how large tech companies like LinkedIn use data in AI training.
Their conclusion emphasized the importance of protecting personal information.
The Data Protection Authority has specifically addressed the use of sensitive data when AI algorithms are trained on large data sets.
The concerns in this matter center primarily on potential ethical issues, including the misuse of data and the violation of privacy rights.
Norway is following the lead of other countries like the UK, Canada, and Australia.
In response to tech companies' data practices, these countries have implemented legislative policies that empower users to maintain control of their personal information online.
Implications for LinkedIn and other tech companies
LinkedIn and other tech companies are under pressure to comply with these stricter data governance laws.
Some experts argue that this pressure could stunt the progression of AI algorithms.
However, the unwavering stance of governments signals a strong commitment to prioritizing data security and user privacy on platforms that integrate artificial intelligence into their operations.
Tech companies might need to adjust their data collection and usage policies.
This could involve obtaining clearer consent from users to use their information for AI-related purposes.
The implications could be far-reaching, potentially leading to refined global standards for algorithm training.
Additionally, the development and deployment of AI technologies could change, as enforcement of data protection laws is expected to tighten in the near future.
The post Norway and others will not allow LinkedIn to use data for AI training appeared first on Invezz