Assessing progress

Best Practices for Effective Assessments

This section presents best practices, organized as a step-by-step procedure, that help you maximize the value of AppFaktors assessments.

Step 1: Justify the Assessment

You can use AppFaktors assessment questionnaires for applications and application artifacts. You can snapshot the current situation, or compare “before” and “after” assessments to shed light on any changes that may be needed to comply with organizational standards.

The following questions will help you determine why you are performing an assessment:

  • What do I need to know?

  • Why do I need to know it?

  • What will happen because of this questionnaire?

  • Do I need to conduct this assessment, or can I get this information from existing sources?

It's a good idea to start with contextual questions:

  • What is the proposed change to the application?

  • How many user groups will be impacted?

  • What is the priority for this change?

Answers to these contextual questions can help you determine:

  • The degree of change to the new or existing architecture.

  • How the proposed change(s) will impact the architecture.

  • The completeness and validity of the proposed change.

Step 2: Determine What You Are Measuring

This step determines the scope of your assessment, which should align with your organization’s architectural program and how it evaluates development outcomes and impact. For example, you can measure:

  • An architecture model and views for specific analyses

  • Organization goals and business objectives

  • Architecturally significant requirements (ASRs)

  • Policies and controls

  • Risks

  • Roadmap implementation status

  • Generic knowledge gathering about an application and/or its artifacts

  • Application maturity

Step 3: Determine Who to Ask

Who are the appropriate people to include in this assessment?

Note: A high response rate is crucial for ensuring truly representative results.
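
This guide does not define a specific response-rate threshold, so treat the figure below as purely illustrative. A minimal check in Python might look like this (the 80% target is an assumption, not a product default):

    # Hypothetical illustration: the 80% target is an assumption,
    # not an AppFaktors default. Choose a target that fits your organization.
    invited = 25        # Assessment Responders invited
    responded = 21      # completed questionnaires received
    response_rate = responded / invited

    TARGET_RATE = 0.80  # assumed threshold for representative results
    print(f"Response rate: {response_rate:.0%}")
    if response_rate < TARGET_RATE:
        print("Consider follow-up reminders before analyzing results.")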

Step 4: Consider Your Audience

Think carefully about who you use as Assessment Responders. Some key factors include:

  • Job title, role, and responsibility

  • Skill level

  • Familiarity with the application and architecture being assessed

  • A variety of tenure times and seniority levels to ensure both “fresh” and “seasoned” perspectives

Consider testing your assessment with a few people who are similar to your proposed Assessment Responders. This can help you add clarity, refine the process, and detect errors before the assessment formally begins. A good questionnaire with well-worded questions will reduce systematic measurement errors and improve the validity of your assessment.

Step 5: Consider a Confidential Assessment

When you conduct a confidential assessment:

  • Individual responses are not shared with anyone, nor does AppFaktors use this assessment information for any other purpose.

  • Assessment participants at all levels are responsible for maintaining confidentiality.

When you conduct an open (or “shared”) assessment:

  • All participants see one another’s responses. This facilitates collaboration and review, and helps ensure coverage when not every Assessment Responder is equipped to answer every question.

  • The possibility of conducting duplicate assessments is reduced.

  • However, some people may not answer forthrightly, and seeing other people’s answers may trigger additional, potentially problematic interactions.

Step 6: Choose a Measurement Scale and Scoring

Use scales that are appropriate to your Assessment Responders and that provide the information you need. AppFaktors assessments include scorecards that measure Assessment Responder/Reviewer engagement and the assessment outcome. Objective questions lend themselves to more accurate scoring. Some examples include the following (a scoring sketch follows the list):

  • Fixed response: Questions must be answered from among several options, such as yes/no, true/false, multiple choice, and agree/disagree.

  • Open-ended: Questions must be answered via open narrative.
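
To make the distinction concrete, here is a minimal Python sketch of how fixed-response answers might map to numeric scores. AppFaktors scorecards handle scoring internally; the scale and names below are illustrative assumptions, not the product’s API.

    # Hypothetical five-point agree/disagree scale, scored 1 (worst) to 5 (best).
    AGREE_SCALE = {
        "strongly disagree": 1,
        "disagree": 2,
        "neutral": 3,
        "agree": 4,
        "strongly agree": 5,
    }

    def score_fixed_response(answer: str) -> int:
        # Objective scoring: every valid answer maps to exactly one number.
        return AGREE_SCALE[answer.lower()]

    print(score_fixed_response("Agree"))  # 4

    # Open-ended answers have no such mapping; they need human review
    # (or manual categorization) before they can contribute to a score.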

Step 7: Choose a Meaningful Title

A good assessment title tells Assessment Responders what the assessment is about. You should also add a brief statement of purpose.

Step 8: Author the Assessment Questions

You are now ready to create the questions that will form the body of your assessment. When authoring questions, keep the following guidelines in mind (a layout sketch follows the list):

  • Begin with easy questions. This helps Assessment Responders feel comfortable, especially when the opening questions are both relevant to the assessment’s title and purpose and easy to answer.

  • Add tips to guide Assessment Responders through completing each section.

  • Keep the questionnaire as short as possible without jeopardizing reliability by focusing on “need to know” information over “nice to know” information.

  • Put your most important questions up front. Assessment Responders may burn out or rush later questions.

  • Ask one question at a time to avoid potential confusion from “double barreled” questions, such as:

    • Do you rate security and performance higher?

    • Do you increase the cost or lower the performance resource demand?

  • Minimize bias by avoiding loaded questions, such as asking whether teams comply with organization standards and/or industry best practices.

  • Arrange your questions in a logical order, such as by grouping similar questions together.

  • Minimize open-ended questions, which can make your assessment feel like a university exam and also complicate summarizing and scoring. That said, when testing a proposed questionnaire, open-ended questions can help you develop fixed-response questions (e.g., multiple choice) and decide how to categorize answers.

  • Provide space for Assessment Responders to expand on their answers. Invite them to leave additional comments or make more suggestions.
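
The sketch below shows one possible way to lay out a questionnaire that follows these guidelines: easy contextual questions first, a tip per section, similar questions grouped together, and space for comments at the end. All names and fields are hypothetical, not the AppFaktors schema.

    # Illustrative only: one possible in-memory layout for a questionnaire.
    questionnaire = {
        "title": "Payment Service: Pre-Migration Assessment",  # hypothetical
        "purpose": "Gauge readiness to migrate the payment service.",
        "sections": [
            {
                "name": "Context",  # easy questions first
                "tip": "Answer from your day-to-day experience with the service.",
                "questions": [
                    {"text": "How often do you work with this service?",
                     "type": "multiple_choice",
                     "options": ["Daily", "Weekly", "Monthly", "Rarely"]},
                ],
            },
            {
                "name": "Architecture",  # most important questions early
                "tip": "Consider the current production deployment only.",
                "questions": [
                    # One question at a time; no double-barreled wording.
                    {"text": "The service's dependencies are fully documented.",
                     "type": "agree_disagree"},
                    # Space for Assessment Responders to expand on answers.
                    {"text": "Additional comments (optional):",
                     "type": "open_ended"},
                ],
            },
        ],
    }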

Step 9: Create a Cover Section

This goes at the beginning of your assessment and tells people:

  • Why you are conducting the assessment and how responses will be used

  • Who sponsored this assessment

  • Why responding to this assessment is important

  • When questionnaire responses must be received (the deadline)

  • How to address questions that may arise

  • How to use the Engagement Scorecard and action items

Step 10: Check Reliability

Reliability measures the consistency of questionnaire responses. Removing or reorganizing “quirky” or unclear questions reduces random errors and increases reliability.
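
This guide does not prescribe a particular reliability metric, but a common way to check consistency across fixed-response questions is Cronbach’s alpha. A minimal Python sketch over hypothetical 1–5 scores:

    # Minimal sketch: Cronbach's alpha over hypothetical 1-5 scores.
    from statistics import variance

    def cronbach_alpha(responses):
        # responses: one list of per-question scores per responder
        k = len(responses[0])                # number of questions
        by_question = list(zip(*responses))  # scores grouped by question
        item_var_sum = sum(variance(q) for q in by_question)
        total_var = variance([sum(r) for r in responses])
        return (k / (k - 1)) * (1 - item_var_sum / total_var)

    scores = [  # five responders, four questions, scored 1-5
        [4, 5, 4, 4],
        [3, 4, 3, 3],
        [5, 5, 4, 5],
        [2, 3, 2, 2],
        [4, 4, 4, 3],
    ]
    print(f"alpha = {cronbach_alpha(scores):.2f}")  # closer to 1.0 = more consistent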

Step 11: Conduct the Assessment

This is a simple four-step process:

  1. Administer the questionnaire.

  2. Analyze the responses (a tally sketch follows this list).

  3. Determine findings.

  4. Report the results.
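
As an illustrative example of step 2, the Python sketch below tallies fixed-response answers per question. AppFaktors reports this kind of summary for you; the data and question IDs shown are hypothetical.

    # Illustrative only: tally fixed-response answers per question.
    from collections import Counter

    responses = [  # one dict per responder (hypothetical data)
        {"q1": "yes", "q2": "agree"},
        {"q1": "no",  "q2": "agree"},
        {"q1": "yes", "q2": "disagree"},
    ]

    tallies = {}
    for response in responses:
        for question, answer in response.items():
            tallies.setdefault(question, Counter())[answer] += 1

    for question, counts in sorted(tallies.items()):
        print(question, dict(counts))
    # q1 {'yes': 2, 'no': 1}
    # q2 {'agree': 2, 'disagree': 1}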

See the Assessment manager section for instructions on using the Assessment Manager to implement these best practices.