- Why we use and develop AI tools
- Ethical Principles
- Guidelines
- Implications for tools we use
- Implications for the tools we develop
- Emotion vs Sentiment analysis
- Personal data
- Implications for third-party tools we offer to clients
- Data ethics
- Endnote & useful tip
Why we use and develop AI tools
We are a communication company. Our products support better human-to-human relations. We connect people with questions to those in an organization who can provide answers. AI will take over part of this question-answering task, making it easier for people to find answers and solve issues independently. This will, in turn, make human-to-human contact scarcer and, consequently, more valuable. It is essential to recognize this shift.
Our goal when using AI tools is to augment human capabilities, not to replace people. This has always been our aim when automating work and tasks: to make work easier and more enjoyable by eliminating the boring and repetitive, allowing the challenging human work to remain.
The risk with AI tools lies in focusing solely on productivity and efficiency. Voys should remain focused on what matters most: the relationships between the people we support and the connections we facilitate. This is where AI should excel.
AI will indeed make people more productive, including us. However, there are limits to what our brains can handle each day; be mindful of this. The risk is that the benefits of increased productivity will flow back to shareholders alone. We would rather see these gains distributed as value across the entire system, including ourselves. We created the 40-hour work week. Now that AI significantly augments our capabilities while our brains can only manage a certain amount of complex, deep work each day, it is a perfect time to rethink that concept.
Ethical Principles
The European Commission has drafted Ethics Guidelines that align well with our core values:
- Respect for Human Autonomy: Human autonomy and dignity are fundamental and should not be compromised by algorithms or AI through automatic analysis or actions. AI should always operate under human control or supervision.
- Prevention of Harm: AI must not cause physical, mental, or social harm. This includes protecting human dignity and integrity. AI systems should be safe, secure, and resilient against misuse. Special care should be given to vulnerable individuals and to situations with power imbalances. The environmental impact should also be taken into account.
- Fairness: AI systems must be developed and used fairly, ensuring equitable distribution of benefits and avoiding bias and discrimination. Individuals must be protected from being deceived or unfairly restricted in their choices. Fairness also involves proportionality and the ability to seek redress against AI decisions. This means the entity accountable for the decision must be identifiable, and the decision-making process should be explicable.
- Explicability: Transparency and clarity in AI processes are essential for user trust. AI capabilities and purposes must be communicated openly, and decisions should be explainable. Special measures are needed for ‘black box’ algorithms to ensure traceability and auditability, adapting the level of explicability to the context and potential consequences of errors.
Guidelines
The European AI Act classifies AI systems into four categories based on their level of risk:
- Unacceptable Risk: AI systems that threaten people’s safety, livelihoods, and rights are banned. This includes systems like social scoring by governments and those using subliminal techniques to manipulate behavior or exploit vulnerabilities of specific vulnerable groups, such as children or persons with disabilities.
- High Risk: If a system poses a high risk to the health and safety or fundamental rights of natural persons, it must meet strict requirements before it can be placed on the market. High-risk areas include:
- Biometric identification
- Critical infrastructure (water, gas, heating and electricity)
- Education and vocational training
- Employment, workers management and access to self-employment (recruitment tools or making decisions about promotion and/or termination of work)
- Access to and enjoyment of essential private services and public services and benefits (like credit scoring)
- Law enforcement
- Migration, asylum and border control management
- Administration of justice and democratic processes
- Low Risk: This category covers systems with specific transparency obligations, especially when they interact with natural persons. For instance, users must be informed that they are interacting with an AI system, unless this is obvious. This category includes AI used in:
- Systems used for interactions with natural persons, such as chatbots
- Emotion recognition systems or biometric categorization systems
- AI systems that generate or manipulate image, audio or video content, such as deepfakes.
- Minimal Risk: Most AI systems fall into this category, posing minimal or no risk to users’ rights or safety. These systems are largely unregulated under the AI Act, but are encouraged to follow voluntary codes of conduct. This category includes AI used in:
- AI-driven characters in video games
- Spam filters
Four exceptions automatically make an AI system low-risk, even within the high-risk areas listed above:
- Limited, Specific Procedural Task: Such as structuring a questionnaire or recognizing duplicates.
- Quality Enhancement: Improving work quality, like converting notes into formal letters.
- Detection of Anomalies: Identifying unusual decision patterns for human review.
- Preparatory Task: Assisting with intake processes, text analysis, or making suggestions.
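As an illustration only, the classification logic above can be sketched in a few lines of Python. The area and purpose labels are hypothetical shorthand for the categories in this section, and the sketch is a first-pass triage aid, not legal advice; the AI Act itself remains the authoritative source.

```python
from enum import Enum

class RiskLevel(Enum):
    HIGH = "high"
    LOW = "low"
    MINIMAL = "minimal"
    # Unacceptable-risk systems are banned outright and never reach this triage.

# Hypothetical shorthand labels for the high-risk areas and low-risk exceptions above.
HIGH_RISK_AREAS = {
    "biometric identification", "critical infrastructure", "education",
    "employment", "essential services", "law enforcement",
    "migration and border control", "administration of justice",
}
LOW_RISK_EXCEPTIONS = {
    "procedural task", "quality enhancement", "anomaly detection", "preparatory task",
}

def triage(area: str, purpose: str) -> RiskLevel:
    """Rough first-pass triage of a proposed AI use case."""
    if purpose in LOW_RISK_EXCEPTIONS:  # the four automatic exceptions
        return RiskLevel.LOW
    if area in HIGH_RISK_AREAS:
        return RiskLevel.HIGH
    return RiskLevel.MINIMAL

print(triage("education", "admission screening"))  # RiskLevel.HIGH
print(triage("education", "quality enhancement"))  # RiskLevel.LOW
```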
Implications for tools we use
Feel free to use any low-risk AI tool, but pay special attention when using AI tools for employment or critical infrastructure purposes, since these areas can be high-risk.
Don't hesitate to use new low-risk AI features in existing tools that have already undergone GDPR and ISO checks.
Given our open-by-default policy, most organizational information can be used in these tools. Be aware that shared data might be used for training the AI.
AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects. Therefore, both ‘real-time’ and ‘post’ biometric identification systems should be classified as high-risk. Since voice can be used for identification, AI systems that record or process telephony audio can be considered high-risk.
However, not every application in these categories is automatically high-risk.
Examples:
- An AI screening student essays for admission to a master's degree is high-risk because it determines access to education.
- An AI checking an essay for plagiarism is not high-risk, despite being related to education, as it isn't listed as such.
- An AI evaluating waste placement and issuing fines is not high-risk. Although waste management is an essential service, this specific application doesn't fit the high-risk criteria.
Implications for the tools we develop
We focus on developing low-risk or minimal-risk AI systems, such as:
- Transcription services
- Summarization services
- Sentiment analysis
- Automatic routing
Emotion vs Sentiment analysis
There is a difference between sentiment analysis and emotion analysis. Emotion recognition involves AI systems detecting and interpreting human emotions based on various inputs, including tone of voice. Emotion recognition may only be used for specific medical or safety purposes.
Sentiment analysis involves AI systems analyzing textual data to determine the sentiment behind the text, categorizing it as positive, negative, or neutral. It focuses on the tone and emotional context of written or spoken language. We will only be developing sentiment analysis and will refrain from emotion analysis.
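To make the distinction concrete, here is a minimal sketch of what a sentiment analysis service can look like, using the open-source Hugging Face transformers library and a locally run model. The model name is only an illustrative example, not a statement about what we actually ship:

```python
# pip install transformers torch — the model runs locally, so no data leaves our servers.
from transformers import pipeline

# Illustrative open-source model that labels text as positive, negative, or neutral.
classifier = pipeline(
    "sentiment-analysis",
    model="cardiffnlp/twitter-roberta-base-sentiment-latest",
)

for text in ("Thanks, that solved my problem!", "I have been on hold for an hour."):
    result = classifier(text)[0]
    print(f"{text!r} -> {result['label']} (score {result['score']:.2f})")
```

Note that this sketch classifies text, not voice characteristics: feeding tone-of-voice features into such a model would cross into emotion recognition.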
Although not legally required, we will be transparent about the open-source components used and how these features work. We highly value privacy and will run these services on our hardware or regionally hosted servers, ensuring privacy and GDPR compliance or adherence to local privacy laws.
It’s worth noting that any tools we develop and release under free and open-source licenses are not subject to the EU AI Act, except when they are prohibited or non-transparent. This also means open-source AI tools we use fall outside of the AI Act.
Personal data
Be careful with (personal) data when training our own AI models. We often need a solid lawful basis and purpose to use this data. Use anonymization and pseudonymization techniques to protect private data while still deriving valuable insights from it.
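To illustrate, below is a minimal pseudonymization sketch assuming a keyed-hash (HMAC) approach, using only the Python standard library; the key shown is a placeholder, and a real deployment would manage it in a secrets store:

```python
import hashlib
import hmac

# Placeholder key for illustration: store and rotate the real key in a secrets manager.
PSEUDONYM_KEY = b"replace-me-with-a-managed-secret"

def pseudonymize(identifier: str) -> str:
    """Map a direct identifier (e.g. a phone number) to a stable pseudonym.

    The same input always yields the same pseudonym, so records remain linkable
    for analysis, but the original value cannot be recovered without the key.
    """
    digest = hmac.new(PSEUDONYM_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

print(pseudonymize("+31 50 123 4567"))  # stable, non-reversible pseudonym
```

Unlike anonymized data, pseudonymized data still counts as personal data under the GDPR and must be handled accordingly; only true anonymization takes data out of scope.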
Implications for third-party tools we offer to clients
We have strategic partnerships with third parties for corporate clients. These partners develop AI tools that run in local data centers, and their compliance is verified before the tools are offered to customers.
Before offering these AI systems to customers, the following security measures, among others, must be checked:
- Data storage and processing: All data must be stored and processed within the European Economic Area (EEA). We strictly prohibit the transfer of personal data outside the EEA without appropriate safeguards.
- End-to-end encryption: When possible, data should be encrypted end-to-end to prevent unauthorized access or interception, ensuring its confidentiality and integrity.
- Local storage preference: We prioritize using local storage solutions to minimize the risk of data exposure and maintain greater control over access and security.
- Avoidance of personal data in AI: Personal data should not be incorporated into AI models, particularly when utilizing third-party solutions such as those provided by OpenAI. Anonymization and pseudonymization techniques are prioritized to protect individual privacy while deriving valuable insights from data.
Data ethics
It’s good to know that we also have a Data Ethics Policy, which is centered around the following principles:
- Prioritize Human Interests: Always put the data owner's interests first, ensuring they benefit from data use and are not negatively impacted.
- Ensure Control and Transparency: Inform the data owner about data usage, provide clear ways for them to object, and maintain their control over their data.
- Communicate Clearly: Be transparent about the purpose and methods of data processing, and regularly ensure the data owner understands why their data is being used.
Endnote & useful tip
By adhering to these principles and guidelines, we demonstrate our commitment to responsible AI usage, respecting individual rights, and safeguarding the privacy of all stakeholders.
If you have any questions or concerns regarding the use of AI within our organization, please do not hesitate to contact the Security Circle.
Tip: You can ask our own AI Assessment GPT if your AI idea or tool is safe to use. It will give you a Compliance Score, tell you what works great, where you can improve, and provide you with some friendly advice.
Together, we can uphold the highest ethics, compliance, and data protection standards in our AI endeavors.