
Artifact 1: New Project Team Survey for Evaluating a Team in Motion
Created for EDIT 7550E: Management of Instructional Technology Projects
When I was new to instructional design, I associated evaluation solely with procedures like learner outcome testing or measuring the success of a course. But the more I practiced ID, the clearer it became that, more often than not, evaluation starts before a single deliverable is drafted. That was the case with the Preliminary Team Information Survey I designed during the kickoff phase of the OLOD Roadmap Infographic Revamp project for UGA Extension.
Our team was new to working together; only a few of us already knew one another from prior classes. We were navigating a complex client request, and we had exactly seven weeks to
deliver a fully revamped instructional job aid. As the project manager on this team, the first thing I wanted to know was “Who are we, really, and how do we work together comfortably under the constraints of this project?” So I made a survey.
At face value, the Preliminary Team Information Survey collected what you’d expect: availability, working styles, comfort levels with tech, and communication preferences. But baked into those responses was something more important: a baseline against which we could measure our team’s cohesion and effectiveness as the project progressed. I wasn’t just trying to gather facts; I was beginning the process of evaluating the team as a learning system in itself. The idea for the survey came to me early in the process, and it proved to be an incredibly helpful, intuitive tool for tracking my team’s progress and effectiveness as we went along.
Creating the survey helped me put the IBSTPI Evaluation competencies into immediate practice. According to the standard, instructional designers must be able to “design and implement formative evaluation strategies” and “use multiple evaluation methods to assess the adequacy of instructional and non-instructional interventions.” This project demanded both. The survey served as an initial formative evaluation of our internal team dynamics, highlighting what might support, or potentially threaten, our collaboration down the line. It also gave me a framework for observing team health throughout the project’s lifecycle.
Specifically, I used the survey to measure whether the team had the conditions necessary to succeed. Were we aligned on expectations? Did we understand each other’s working rhythms? Did everyone feel comfortable raising
concerns? These may not be traditional metrics, but they were essential for evaluating the effectiveness of our team structure, especially since our team operated across multiple time zones and did all of our work remotely.
What made this project especially meaningful from an evaluation standpoint was how the survey directly informed my decision-making as project manager. Once all team members had indicated their communication, meeting, and document storage preferences, I created team channels and established norms for each right away, which drastically reduced our start-up time compared to what I have experienced on other teams that placed less emphasis on each member’s personal preferences. When a teammate noted a strong preference for meetings to end by 7pm because she began her work days at 4am, I created a consistent recurring meeting schedule with built-in hard stops that accommodated everyone’s availability and preferences like these. In short, the survey didn’t just collect data: it allowed me to act on that data in real time.
This project also reinforced my belief that evaluation should never be an afterthought. Initially, I viewed evaluation as something I would do only at the end of a project, once everything was built and launched, but this survey allowed me to practice a more holistic model: one where evaluation begins with relationships, intentions, and conditions. That’s something I want to carry forward into every client engagement I take on. Ultimately, it served as a perfect opportunity to practice the belief that evaluation is as much about asking good questions as it is about measuring outcomes.
⬨⬨⬨
Artifact 2: Usability Test Checklist for Portfolio Website
Created for Beta Testing of this Portfolio
Admittedly, evaluation is the competency area in which I have the least experience and confidence. I decided to showcase a usability checklist I designed to evaluate an early version of this portfolio site. I had spent so much time building a site that reflected my skills and personality as an instructional designer, but I needed to remind myself to stop and systematically check whether the navigation made sense, the links behaved as they should, and the user experience flowed naturally. I was fortunate to have a few friends who served as beta testers and completed the usability test checklist. I also used the checklist myself several times as the prototype came to life.
The checklist itself was designed to be functional, straightforward, and comprehensive. I structured it around five major areas: external links, internal links, navigation and orientation, content clarity, and UX design (including accessibility considerations). For the external and internal link sections of the checklist, I included a statement describing the intended function of each element and checkboxes to indicate (1) whether the link did what it was supposed to do and (2) whether it opened in a separate window as intended (the latter applied only to external links). For the other areas, I developed affirmative usability statements such as “The menu options were clear and labeled in a way that made sense to me” and “The site felt consistent and well-designed throughout.” These statements gave beta testers clearly identified characteristics of the ideal state and allowed them to answer “yes” or “no.”
Creating this checklist gave me a new appreciation for how nuanced evaluation really is, especially when the thing being tested is
something you built yourself. It forced me to get really specific about what “functioning” and “clear” actually mean and define those terms in observable, testable ways that I could easily communicate to others using this tool.
It also pushed me to think critically about how I revise and improve based on evaluation data. As I received feedback and completed checklists from my beta users, several small but important issues, such as mislabeled buttons and incorrectly formatted descriptions, were brought to my attention. Addressing these details didn’t just fix the technical problems; it also genuinely improved the site for future users.
Although this form of evaluation was nothing like a summative evaluation of a formal learning experience, the checklist required the same attention to usability, learner needs, and design alignment that I would bring to any instructional project. It gave me an opportunity to test and refine my skills in building feedback tools that are focused and applicable.

