In its 2025 Accountability Report on AI Ethics, LG AI Research highlighted its focus on translating ethical principles into operational practice. The organization strengthened its internal governance, expanded tools that help identify risk, and emphasized processes that make AI systems more reliable and fair.
Key initiatives included:
- Scaled AI Ethical Impact Assessments: Roughly 60 projects underwent structured review, identifying 219 potential risks and closing about 82% of them. The remainder are carried forward into subsequent projects so that mitigation measures continue to be addressed over time. This process helps ensure that ethical considerations shape projects early, rather than after deployment.
- Enhanced AI risk taxonomy: LG expanded its K-AUT framework to 226 detailed risk categories, covering areas such as privacy, social safety, and emerging risks in advanced AI systems. This taxonomy guides consistent evaluation across teams.
- Model safety verification: The organization employed internal and external red-teaming and used its KGC-SAFETY benchmark to test models across multilingual and adversarial scenarios, improving resilience and reducing unsafe outputs.
- Data provenance and compliance: Using EXAONE Nexus, LG introduced automated data tracing that achieved 81% accuracy and operated 45× faster than human review—helping identify copyright risks in large training datasets.
Together, these efforts illustrate LG AI Research’s commitment to building systems that are transparent in their development, thoughtful in their use of data, and proactive about the real-world risks AI can introduce.
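As a rough illustration (not LG's actual tooling), the assessment figures above boil down to a simple closure-rate calculation over a risk register. The `RiskItem` structure and field names below are hypothetical; only the headline numbers (219 risks, roughly 82% closed) come from the report.

```python
from dataclasses import dataclass

@dataclass
class RiskItem:
    # Hypothetical record for one risk found during an ethical impact assessment
    category: str      # e.g. a category drawn from a taxonomy like K-AUT
    mitigated: bool    # True once a mitigation has been applied and verified

def closure_rate(risks: list[RiskItem]) -> float:
    """Fraction of identified risks whose mitigations are complete."""
    if not risks:
        return 1.0
    return sum(r.mitigated for r in risks) / len(risks)

# Reproduce the report's headline figures: 219 risks, ~82% closed
total = 219
closed = round(total * 0.82)  # 180 closed, 39 carried forward
register = ([RiskItem("privacy", True)] * closed
            + [RiskItem("social safety", False)] * (total - closed))

print(f"{closure_rate(register):.0%} closed, {total - closed} carried forward")
```

The point of tracking the register this way is that the unclosed remainder stays visible and attached to future work, rather than disappearing once a review cycle ends.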
The IEEE SA Partnership: Strengthening Verification and Raising the Bar
A major milestone highlighted in the report is LG AI Research’s formal collaboration with the IEEE Standards Association (IEEE SA). In 2024, LG became the first organization in Korea qualified as an Authorized Assessor for the IEEE CertifAIEd™ program, a global initiative that evaluates AI systems across the pillars of Accountability, Privacy, Transparency, and Algorithmic Bias. Through this partnership:
- LG AI Research began conducting official IEEE CertifAIEd assessments, applying the program’s structured verification process to real AI products.
- The collaboration supported the certification of LG Electronics’ ThinQ ON, which became the first AI product globally to receive IEEE CertifAIEd™—a result verified through IEEE SA’s independent multi-stage review process.
The report details how this certification process works, from determining assessment scope to documentation review and IEEE SA’s independent validation. CertifAIEd™ provides a repeatable way for companies to demonstrate that their AI meets recognized ethical benchmarks before entering the market.
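To make that multi-stage flow concrete, here is a minimal, hypothetical sketch of how an applicant might track its own readiness across the program's four pillars before IEEE SA's independent validation. The pillar names come from the program description above; the stage names and data structures are illustrative, not part of the CertifAIEd™ program itself.

```python
# Hypothetical readiness tracker for a CertifAIEd-style assessment.
# Stages mirror the article's summary:
# scope -> documentation review -> independent validation.
PILLARS = ["Accountability", "Privacy", "Transparency", "Algorithmic Bias"]
STAGES = ["scope_defined", "documentation_reviewed", "independently_validated"]

def pending_work(status: dict[str, set[str]]) -> dict[str, list[str]]:
    """Return, for each pillar, the stages not yet completed."""
    return {
        pillar: [s for s in STAGES if s not in status.get(pillar, set())]
        for pillar in PILLARS
    }

# Example: two pillars partially done, two not yet started
status = {
    "Accountability": {"scope_defined", "documentation_reviewed"},
    "Privacy": {"scope_defined"},
}
for pillar, todo in pending_work(status).items():
    print(pillar, "->", todo or "complete")
```

Even a simple checklist like this makes the repeatability of the process visible: every product goes through the same stages against the same four pillars, which is what allows the result to serve as a recognized benchmark.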
LG’s participation in this program is significant not just for the company, but for the broader AI ecosystem. By applying standards-based evaluation internally and sharing insights externally, LG is helping advance the practical adoption of AI ethics frameworks beyond regulatory compliance.
What’s Next: How Other Organizations Can Pursue Responsible AI Certification
As global expectations for safe and trustworthy AI continue to grow, more organizations are exploring structured ways to demonstrate responsible development. IEEE SA’s CertifAIEd™ program provides one such pathway, offering:
- Assessment of AI systems against established ethical criteria
- Professional training for teams involved in AI design, risk, and compliance
- Curriculum options for organizations and academic institutions that want to integrate responsible AI concepts more formally
These options allow companies to start where it makes sense for them—whether by validating a product, training internal experts, or building foundational knowledge across their workforce.
The example set by LG AI Research in 2025 shows the value of combining internal governance with external verification: organizations gain clarity, customers gain confidence, and the industry gains a more consistent standard for responsible AI.
A Path Forward
LG AI Research’s progress reflects a larger shift underway: moving from high-level ethical goals to concrete, testable practices. Its collaboration with IEEE SA demonstrates how independent assessment can complement internal governance, offering transparency and reinforcing accountability at scale.
Organizations seeking to strengthen their own responsible AI programs can look to this model, pairing in-house controls with recognized external standards, to build systems that earn trust and meet global expectations.
Learn more about IEEE CertifAIEd™ and how it can help strengthen your AI solutions, or explore becoming an IEEE authorized collaboration partner.