evaluation1

Name of person/project that you are providing feedback for: _________

1. Briefly describe the training and evaluation 
2. Comment generally on what you like about your colleague’s training module and if you think their evaluation can capture efficacy in the training. (If not, what could strengthen their evaluation/evaluation process?)
3. Share at least 2 aspects of the training that you find most effective and tell us why. 
4. If you have a critique/critiques, please share those and accompany your critique with suggestions for an iteration.
5. Finally, share 2 specific suggestions on how to make the best aspects of the training even better.
6. OPTIONAL: If you have any additional comments, thoughts, or constructive suggestions, please share them.

I posted the lesson as a PowerPoint in a separate attachment.

____________________________________________________________________________________

Below is an example of what it should look like, where other students have responded to Eddie Almada De La Vega’s PowerPoint:

Name of person/project that you are providing feedback for: Eddie Almada De La Vega

1. Briefly describe the training and evaluation

This training module, “Data Fluency for Decision Makers,” is tailored for non-technical business users at Kueski who need to develop foundational SQL skills to support faster, data-driven decision-making. The course addresses a key organizational bottleneck: the overburdening of the Analytics Engineering team with ad hoc requests from teams that lack data fluency. Through three interactive activities—writing SQL queries to solve real business problems, peer review and query redesign, and dashboard creation—the training scaffolds learning in a hands-on, outcome-focused manner. Participants use actual company data within Databricks, enabling immediate application to real-world scenarios. Evaluation occurs via Google Forms and includes activity-based assessments and a final training evaluation form. These tools capture learning outcomes, comprehension, and learner feedback. The training aligns with Kueski’s strategic mission to become a data-first fintech company by empowering decision-makers with the autonomy to generate insights, enhance agility, and reduce dependency on centralized analytics support.
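For a sense of scale, a participant query answering a business question like “monthly GMV” might look roughly like the following. This is a hypothetical sketch only; the table and column names are illustrative, not Kueski’s actual schema:

-- Hypothetical sketch: monthly GMV from a loan disbursements table
-- (loan_disbursements, disbursed_at, and loan_amount are made-up names)
SELECT
  DATE_TRUNC('month', disbursed_at) AS month,
  SUM(loan_amount) AS gmv
FROM loan_disbursements
WHERE disbursed_at >= DATE '2024-01-01'
GROUP BY DATE_TRUNC('month', disbursed_at)
ORDER BY month;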

2. Comment generally on what you like about your colleague’s training module and if you think their evaluation can capture efficacy in the training

Eduardo’s training module is thoughtfully crafted and aligns well with organizational goals and participant learning objectives. I particularly appreciate the real-world relevance of the content—participants don’t just learn abstract SQL concepts but apply them immediately to business challenges like tracking GMV or fraud rejections. This practical, problem-based learning model ensures that participants see the immediate value of their skills. The evaluation approach—via activity-based Google Forms—helps measure comprehension and engagement during each phase. However, while the assessments capture skill acquisition, they may not fully capture long-term behavior change or decision-making impact. To strengthen the evaluation, I recommend introducing a pre- and post-training quiz to measure knowledge gains and confidence levels. In addition, integrating follow-up manager feedback or observational check-ins (e.g., 30 days later) could help assess the degree to which learners are applying skills on the job. These enhancements would help demonstrate both efficacy and return on training investment.


3. Share at least 2 aspects of the training that you find most effective and tell us why

Two elements of this training stand out as especially effective: (1) the use of real data from Kueski’s actual Databricks environment and (2) the peer review and redesign activity. By requiring learners to interact with real business data—rather than hypothetical examples—the training immediately boosts relevancy and confidence. Participants gain fluency in querying the exact tables and fields they’ll encounter on the job, which lowers barriers to adoption and accelerates practical usage. The peer review and redesign task is equally powerful, fostering a culture of collaboration and shared learning. This not only sharpens SQL logic through exposure to alternative solutions but also encourages critical thinking around readability, accuracy, and performance. Learners are encouraged to see query writing not just as a technical task, but as a strategic, iterative process. Both components foster applied learning and cross-functional understanding, crucial to building a truly data-driven decision-making culture across the organization.


4. If you have a critique/critiques, please share those and accompany your critique with suggestions for an iteration

One potential critique of the module is that while the technical exercises are excellent, the training assumes a baseline familiarity with tools like Databricks and SQL editors, which may not be true for all participants. For those completely new to querying or data environments, the pace and complexity could feel overwhelming. To make the learning more inclusive, I suggest adding a short optional pre-module or self-paced onboarding video introducing Databricks basics, SQL syntax, and navigation tips. This would level the playing field and help ensure all learners start with a shared foundation. Additionally, while the activities focus well on what to do, there could be greater emphasis on why certain SQL strategies are more efficient or insightful in a given context. Including a “trainer’s insights” section or a post-activity debrief after each task would provide additional context and depth, reinforcing strategic thinking alongside technical execution.


5. Finally, share 2 specific suggestions on how to make the best aspects of the training even better

To enhance the best aspects of this training—particularly the hands-on data querying and peer collaboration—I suggest the following two improvements. First, include a curated library of example queries or “SQL patterns” participants can use as references post-training. These patterns could map to common business questions and promote knowledge retention while accelerating real-world application. Second, build a peer support structure post-training, such as an internal SQL Slack channel or weekly office hours where learners can share dashboards, ask questions, and receive mentorship. This would extend learning beyond the classroom, strengthen cross-functional data fluency, and build a supportive culture of continuous improvement. Together, these suggestions would make the training’s strongest elements even stronger, supporting lasting skill development and organizational impact.
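To illustrate the first suggestion, one entry in such a pattern library might pair a recurring business question with a reusable query skeleton. The names below are hypothetical placeholders, not actual Kueski tables:

-- Pattern: “How many of X happened per period?”
-- Swap in the relevant table, timestamp column, and filters
SELECT
  DATE_TRUNC('week', created_at) AS period,
  COUNT(*) AS events
FROM fraud_rejections
GROUP BY DATE_TRUNC('week', created_at)
ORDER BY period;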

2nd example

Name of person/project that you are providing feedback for: Lavonzell Nicholson

1. Briefly describe the training and evaluation

Lavonzell’s session kicks off a three-part series designed to help staff feel more confident using AI, especially when it comes to writing better prompts. Module 1 lays the foundation by introducing a simple but powerful structure for building prompts: six key elements (Task, Context, Role, Instructions, Expectations, and Example). The training is super practical: it’s not just theory; participants actually get to break down weak prompts, improve them, and apply what they’ve learned to their own work. There’s no formal test, but learning is evaluated through hands-on practice and real-time feedback, which aligns really well with how adults learn best.
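To make the framework concrete, here is a hypothetical prompt assembled from the six elements (this example is illustrative, not taken from Lavonzell’s module):

Task: Summarize the attached research memo for a general audience.
Context: The memo reports preliminary findings from a district survey on AI adoption in schools.
Role: You are an experienced education policy writer.
Instructions: Use plain language, avoid jargon, and keep the summary under 200 words.
Expectations: Three short paragraphs, ending with one key takeaway.
Example: Open with a sentence like “District leaders are cautiously optimistic about AI.”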

2. Comment generally on what you like about your colleague’s training module and if you think their evaluation can capture efficacy in the training. (If not, what could strengthen their evaluation/evaluation process?)

What I really liked was how clearly the training connects to both a real need (low confidence with AI) and a bigger goal (producing faster, sharper research). The structure is clean, the examples are relevant, and the activities are useful right away. Using prompt analysis and rewriting as a way to evaluate learning is a smart move; it’s low-pressure but still shows whether people are getting it. A quick self-check or peer review could help reinforce the learning; even a few reflection prompts like “Can I spot all six components in a prompt?” would go a long way in helping people track their growth.

3. Share at least 2 aspects of the training that you find most effective and tell us why.

1. The six-block framework: It breaks down something that can feel abstract (writing good AI prompts) into clear, repeatable steps. It’s easy to remember, immediately useful, and exactly what adult learners need.

2. Ties to the bigger picture: The training doesn’t feel random or inconsistent. It’s clearly part of a larger effort to help CRPE work smarter and faster. That’s the kind of alignment that helps people see the value and stay engaged. It’s not just “learn this tool,” it’s “here’s how this helps us do better work.”

4. If you have a critique/critiques, please share those and accompany your critique with suggestions for an iteration.

The training could benefit from a bit more interaction. The exercises are a great addition, but adding some peer discussion or group critique (even virtually) would expand the learning. Tools like shared docs, polls, or breakout rooms could make it more dynamic.  Also, it might help to show the full roadmap of the three-part series early on. A simple “You are here” slide with a quick overview of what’s coming next would help learners see how this module fits into the bigger journey. And a downloadable cheat sheet or job aid would be a great resource for folks to reference later.

5. Finally, share 2 specific suggestions on how to make the best aspects of the training even better.

1. Make prompt comparisons into a mini-game: Turn the strong vs. weak prompt examples into a quick, interactive game. Show a few anonymous prompts and have people vote on which ones check all six blocks. It’s fun, low-stakes, and reinforces the framework.

2. Create a “Prompt Planner” worksheet: Give learners a simple template where they can fill in each of the six blocks before writing a prompt. It’s a great way to build muscle memory and helps them apply the framework consistently, especially as they move into the more advanced modules.
