Training Technical Group


Congratulations to the COPE Project Team


    Posted 02-05-2025 11:02

    Large agencies within the U.S. Federal Government contribute to a pair of programs called the Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) programs.  SBIR/STTR projects represent competitively awarded, three-phase research and development efforts.  In Phase I, the winning teams demonstrate the feasibility of their respective concepts.  In Phase II, a subset (normally one) of the Phase I winners is selected to develop a proof-of-concept prototype.  In Phase III, the research team works with any interested end-users to mature the prototype for operational use.

    This past year, researchers from Sonalysts, Inc. teamed with researchers from CUBRC, Canisius University, and Drexel University to explore ways to improve the knowledge and skills of cybersecurity professionals who protect domestic Information Technology (IT) systems.  One part of the Phase I effort demonstrated that it was feasible to automatically identify the knowledge and skills in high demand within the cybersecurity community.  Another examined how a standards-based approach to learner records management could be used to track learner mastery and support credentialing.  A third explored how best to provide a virtualized practice environment.  This post addresses the fourth component of the effort.

    It is very unusual to include human subjects research in a Phase I STTR.  The resources available for the effort and the associated timeline often make conducting such research impractical.  In this case, the program sponsor got permission to require human subjects research in Phase I.  Specifically, the sponsor wanted us to develop and evaluate a sample course that would deliver cyber instruction to prospective IT professionals.  As requested, we developed a small course. We delivered it to learners using either conventional classroom instruction or closed-loop adaptive training that assembles media- and simulation-based instruction and assessment items in real time to meet learners' needs.  The scope of the Phase I effort limited our demonstration course to a single unit with two knowledge-based learning objectives (LOs) and one skill-based LO.  As a result, our written post-test was limited to six items (three per knowledge-based LO), and the practical post-test had only two scenarios.  We were skeptical that we would be able to show anything.  We imagined that all the students would do very well with such small tests, and we wouldn't be able to tell the difference between the two groups.

    We were wrong.  The figures below show the results of the written and practical post-test comparisons. 

    First, consider the six-question written post-test.  As shown below, participants in the closed-loop adaptive training group showed much less variability than those in the instructor-led group and were clustered at the very top of the scale.  Even after adjusting for this unequal variance between the two groups, the difference was statistically significant using a very stringent standard.  Further, we assessed effect size using Cohen's d; the observed effect size was 2.3, indicating almost no overlap between the score distributions we would expect from instructor-led and closed-loop participants.
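For readers curious about the arithmetic behind an effect-size claim like this, here is a minimal sketch of a Cohen's d calculation.  The scores below are made up purely for illustration; they are not the study's data:

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical post-test scores (0-6 scale), for illustration only.
instructor_led = [3, 4, 4, 5, 5, 6, 3, 4]
closed_loop = [6, 6, 5, 6, 6, 6, 5, 6]

def cohens_d(a, b):
    """Cohen's d: difference in means divided by the pooled
    sample standard deviation of the two groups."""
    na, nb = len(a), len(b)
    pooled_sd = sqrt(((na - 1) * stdev(a) ** 2 +
                      (nb - 1) * stdev(b) ** 2) / (na + nb - 2))
    return (mean(b) - mean(a)) / pooled_sd

d = cohens_d(instructor_led, closed_loop)
print(round(d, 2))  # ≈ 1.87 for these made-up numbers
```

Note that the classic pooled-SD formula assumes roughly equal variances; when variances differ sharply, as in the comparison described above, some analysts prefer alternatives such as Glass's delta, which standardizes on the control group's SD alone.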

    [Figure: A summary of the results of the written post-test.]


    A very similar pattern emerged on the practical exam.  Again, closed-loop participants had less spread in their scores because they were largely clustered near the top of the scale.  Because this test had only two items and the variances were again unequal, the best we could show was marginal statistical significance, along with a medium-to-large effect size indicating that the "average" learner in the closed-loop group would perform better than about 75 percent of the learners in the instructor-led group.
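The "better than about 75 percent" reading is a standard translation of effect size into a percentile (often called Cohen's U3): assuming roughly normal score distributions, the average member of one group sits at the Φ(d) percentile of the other group's distribution.  A quick sketch, with illustrative d values:

```python
from math import erf, sqrt

def u3(d):
    """Cohen's U3: the fraction of the comparison group that the
    average treated learner would outperform, assuming both groups'
    scores are normally distributed with equal variance.
    Phi(d) computed via the error function."""
    return 0.5 * (1 + erf(d / sqrt(2)))

# A medium-to-large effect (d around 0.7) lands near the 75th
# percentile, matching the interpretation above.
print(round(u3(0.7), 3))  # ≈ 0.758

# A very large effect like d = 2.3 exceeds the 98th percentile,
# i.e., almost no overlap between the two distributions.
print(round(u3(2.3), 3))
```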

    [Figure: A summary of the results of the practical post-test.]


    We were, to say the least, thrilled with this outcome.  Seeing differences this big and consistent across only two items was a huge surprise and gratifying.  I want to thank the entire COPE team and our sponsors at DARPA.  Of course, the summary I provided is my own and does not necessarily reflect the position or the policy of DARPA or the Government as a whole, and no official endorsement should be inferred.



    ------------------------------
    James E. McCarthy, Ph.D.
    Vice President, Instructional Systems
    Sonalysts, Inc.
    Fairborn, OH 45324
    937-429-9711 Ext. 100
    ------------------------------