C J (respond no more than 100 words)
Company Context
TTEC Holdings, Inc. is a global innovator in AI-enabled customer experience (CX) technology and services. Operating in over 20 countries with approximately 60,000 employees, the company provides both the digital infrastructure (TTEC Digital) and the human workforce (TTEC Engage) to handle complex customer interactions for clients in healthcare, financial services, and retail.
Implementation of Dublin-10: Planning Concerns
Beyond the ROI and retraining, a critical management concern for TTEC is maintaining the “Humanity-First” culture during the transition to AI. As CEO, my primary planning action is Strategic Role Redesign. While Dublin-10 will automate routine “tier-1” inquiries (like balance checks or password resets), we must plan for the increased complexity of the remaining human-led work.
· Action: I will implement a “Complexity Pay Grade” system. As AI handles the mundane, our human associates will handle more emotionally charged and complex problem-solving. This requires planning not just for training, but for a fundamental shift in our Performance KPIs: moving away from “Average Handle Time” (speed) toward “Sentiment Analysis” and “Empathy Scores,” as sketched below.
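To make the KPI shift concrete, here is a minimal, hypothetical scorecard sketch. The Interaction fields, the 600-second ceiling, and the 45/45/10 weights are illustrative assumptions, not TTEC's actual metrics; the sentiment and empathy inputs are assumed to come from whatever post-call NLP or QA tooling the contact center already uses.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    handle_time_sec: int    # legacy speed metric (Average Handle Time input)
    sentiment_score: float  # 0.0-1.0, e.g., from post-call sentiment analysis
    empathy_score: float    # 0.0-1.0, e.g., from a QA rubric or scoring model

def associate_kpi(interactions: list[Interaction]) -> float:
    """Hypothetical composite KPI: empathy and sentiment dominate,
    while handle time contributes only a small efficiency component."""
    n = len(interactions)
    avg_sentiment = sum(i.sentiment_score for i in interactions) / n
    avg_empathy = sum(i.empathy_score for i in interactions) / n
    # Normalize handle time against an assumed 600-second ceiling.
    avg_speed = sum(min(i.handle_time_sec, 600) / 600 for i in interactions) / n
    # Illustrative weights: 45% empathy, 45% sentiment, 10% efficiency.
    return 0.45 * avg_empathy + 0.45 * avg_sentiment + 0.10 * (1.0 - avg_speed)

print(associate_kpi([Interaction(480, 0.82, 0.90), Interaction(720, 0.65, 0.70)]))
```

Under this weighting, a slow call handled with high empathy outscores a fast but tone-deaf one, which is the behavioral shift the new pay grades are meant to reward.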
Lisa (Respond no more than 100 words)
Professor and Classmates,
I chose to create FRESHMART Grocery for this week’s discussion. It is a small to mid-size grocery chain with 85 stores across the Midwest. I created the video using PowerPoint and Adobe Premiere. While you never see me, the voice is all mine. I have to admit it is a little long-winded; it runs a little over 4 minutes, and there was just so much to cover that I couldn’t decide what to cut. I look forward to hearing from you.
Rafael Docarmo (respond no more than 100 words)
One legal and ethical operational risk I see as especially compelling for businesses today is the unchecked use of artificial intelligence without adequate internal controls or human accountability. As organizations adopt AI to improve efficiency, decision speed, and scale, the risk is not the technology itself, but leadership treating AI as a substitute for judgment rather than a tool that still requires oversight. When controls fail, AI can amplify bias, violate privacy laws, and expose businesses to regulatory and civil liability.
A practical example of this risk is the use of AI systems in employee evaluations, hiring decisions, or workforce optimization. If an AI tool is trained on flawed historical data or deployed without transparency, it can unintentionally discriminate against protected classes or make decisions that cannot be clearly explained or defended. From a legal standpoint, this exposes the organization to employment discrimination claims and regulatory scrutiny. Ethically, it erodes trust by removing fairness and accountability from decisions that directly affect people’s livelihoods.
A best practice to mitigate this risk is the implementation of a structured AI governance model that mirrors traditional internal control frameworks. This includes clearly defining where AI may be used, establishing approval and review processes, requiring human-in-the-loop decision-making for high-impact outcomes, and documenting how AI-generated recommendations are evaluated before action is taken. AI should support leadership, not operate independently of it.
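To picture the human-in-the-loop control described above, here is a minimal sketch under stated assumptions: the Recommendation structure, the 0.5 impact threshold, and the print-based audit trail are hypothetical placeholders, not any specific governance product or framework.

```python
from dataclasses import dataclass

HIGH_IMPACT_THRESHOLD = 0.5  # assumed cutoff; a real program would calibrate this

@dataclass
class Recommendation:
    subject: str         # e.g., an applicant or employee ID
    action: str          # e.g., "reject", "promote", "auto-reply"
    impact_score: float  # 0.0-1.0 estimate of consequence to a person

def route(rec: Recommendation, review_queue: list[Recommendation]) -> str:
    """Gate AI output: high-impact recommendations require human sign-off,
    and every routing decision is recorded for later audit."""
    if rec.impact_score >= HIGH_IMPACT_THRESHOLD:
        review_queue.append(rec)  # human-in-the-loop: a person makes the call
        decision = f"ESCALATED to human review: {rec.action} for {rec.subject}"
    else:
        decision = f"AUTO-APPROVED (low impact): {rec.action} for {rec.subject}"
    print(decision)  # stand-in for a documented, reviewable audit log
    return decision

queue: list[Recommendation] = []
route(Recommendation("applicant-042", "reject", 0.9), queue)   # goes to a human
route(Recommendation("ticket-117", "auto-reply", 0.1), queue)  # proceeds automatically
```

The point of the sketch is that the AI only recommends: the threshold and the queue keep final authority with people, and the log preserves accountability.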
To implement this practice, leadership should treat AI systems the same way they treat other mission-critical processes. Before deployment, organizations should conduct risk assessments that evaluate data integrity, bias potential, legal exposure, and cybersecurity vulnerabilities. Once operational, periodic audits should be conducted to ensure outputs remain compliant and ethical. Employees using AI tools must also be trained to understand the tools’ limitations and to recognize when escalation to human decision-makers is required. Most importantly, accountability must remain clearly assigned: AI does not make decisions; leaders do.
In today’s business environment, the failure to govern AI properly is not just a technical issue—it is a leadership failure. Businesses that proactively establish strong oversight and ethical guardrails will be better positioned to leverage AI’s benefits while avoiding the operational, legal, and reputational damage that comes from uncontrolled automation.
Regina (respond no more than 100 words)
The misuse of artificial intelligence (AI) is an operational risk that I see as a compelling concern for businesses today. As the use of AI continues to grow at a rapid pace, there will be a race to create policies, procedures, regulations, and oversight for its use. AI can help improve productivity, streamline hiring processes, and manage daily operations. However, AI also raises issues such as privacy and bias, which carry ethical and legal risks. Privacy concerns such as excessive employee monitoring, collecting personal data without consent, and improper handling of healthcare information can create ethical and legal exposure. Bias in AI systems can lead to legal issues such as discrimination claims.
The best practice to mitigate the risks of privacy violations and bias is to maintain compliance in all AI-related practices and to make sure the business stays current with all requirements. According to GRC Report (2025), failure to comply with and address emerging legal frameworks can result in sanctions, fines, and eroded public trust. Clear policies and strong oversight can reduce the risk of AI misuse. Not only does this help with compliance, it also ensures that the company operates fairly and transparently.
To implement this practice, I would recommend establishing an operational risk management team. The team can identify, assess, and mitigate the risks associated with AI (Redcliffe Training, 2024). The team can also leverage technology to assist with implementation, such as automated compliance checks and data analytics to manage risks (Redcliffe Training, 2024).