So far, in our work towards the Learning Science book, Sae, Geoff, and I have written some in-depth articles about complex ideas like Learning Ecosystems, Social Metacognition, and even the Nature of Knowledge itself. We’ve tried to provide a thoughtful, practical, and research-grounded narrative.
But… what if there were an easier way… a secret, accelerated path to success that skipped past the lengthy analysis and plodding methodologies?
In today’s post, we’re offering a handy guide that will help you to add sparkle to any idea, and provide the tips you need to wow clients and partners with the thinnest veneer of empiricism and credibility – without all of that boring work.
After all, on the 1st of April, why strive and struggle when there is a shortcut?
Read on for our top 10 all-powerful tips, which can turn your lacklustre L&D report into a powerful scientific research paper, impress your friends, and wow your boss:
Part 1: Finding Published Research
Our first ideas involve creative ways to use published research (the best sort, right?) to build on an academic foundation and prior evidence. These tips will help you create [the appearance of] rigour and quality, and let everyone know you’ve done your ‘due diligence’. Pretty much whatever you are looking at, you can find some relevant articles to support it… Here’s how:
- Use evidence from individual laboratory studies (because the real world is just like a lab). Nearly every concept in L&D has been researched… somewhere. Frequently, these studies are conducted in deliberately constructed environments with the object in question (such as a particular instructional method) carefully isolated for evaluation in a helpful (unrealistic) vacuum. Some of these studies include just a handful of participants, and about half of the time, their positive results are just a statistical fluke – so you’re certain to find a publication with some shiny statistics to support whatever you might claim! When you have found it, just paste the citation into every document.
- Be creative in how you generalise research (because ‘context’ is just a detail really). Closely related to the recommendation above, this next piece of advice is to apply research findings broadly and into new spaces. Don’t worry about the populations involved, whether the learning conditions were realistic, or if the results are replicable. Just search online, find an article with good numbers, et voilà! If you need a good example of this, just look at the work on Growth Mindset: Empirically examined in one domain and context, and then widely generalised as if it were a universal ‘thing’.
- Emphasise the ‘statistical significance’ (because like probably 99% of the time you’ll be right). A lot of people aren’t well-versed in parametric statistics, but most people in the L&D community have probably heard of ‘alpha’ or ‘p-value’. Your best approach is to showcase that statistic prominently, and when you cite foundational research, make sure to emphasise its p-value (for example, “p < 0.05”). Consider adding exclamation marks! We advise liberally using the phrase ‘statistically significant’ or even just ‘significant’ when referring to research with a p-value of less than 0.05. Using the word ‘significant’ lends credibility to the research findings. Consider this the research equivalent of ‘artisanal’ (when describing cheese) or ‘craft’ (when discussing beer).
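If you’d like to see just how easy ‘significance’ is to come by, here’s a minimal sketch (plain Python, no stats library, not from the tips above) that runs 2,000 simulated studies in which the intervention genuinely does nothing – and still harvests a tidy crop of ‘p < 0.05’ results. The group sizes, simulation count, and seed are arbitrary choices for illustration.

```python
import random
import statistics

def two_sample_t(a, b):
    # Welch-style t statistic; with large samples it is ~ Normal(0, 1) under the null
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (mean_a - mean_b) / ((var_a / len(a) + var_b / len(b)) ** 0.5)

random.seed(42)
n_sims, n = 2000, 100
false_positives = 0
for _ in range(n_sims):
    control = [random.gauss(0, 1) for _ in range(n)]
    treatment = [random.gauss(0, 1) for _ in range(n)]  # identical distribution: zero true effect
    if abs(two_sample_t(control, treatment)) > 1.96:    # the fabled "p < 0.05", two-tailed
        false_positives += 1

rate = false_positives / n_sims
print(f"'Significant' results with zero true effect: {rate:.1%}")
```

Roughly one study in twenty comes back ‘significant’ by pure chance – so with a modest search of the literature (or a modest number of your own pilot studies), a shiny p-value is all but guaranteed.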
Part 2: Creating Original Research
Next, you probably need to do some ‘validation’ research on your own, so you can ‘prove’ that your specific L&D offering is effective. Here are some tips for getting the best results:
- Use a pretest/posttest design (because ‘better’ is ‘better’ in every case). Let’s say you’ve made a new training programme and want to show how awesome it is. (Think: performance review coming up soon.) You need to collect some training outcomes. Here’s how to make sure they look good. First, give participants a pretest, then do your training, and afterwards give them a posttest. Don’t worry too much about what happens between the tests. You’ll probably get a medium effect-size improvement just from the retest effect alone – which is like free progress, really. Magically, this works best if the two tests are identical, but that isn’t actually even a requirement. And if you add some test-prep and coaching to your training, you’ll get even bigger results!
- Use Placebo and Hawthorne effects liberally (because if you can measure it, it counts). The medical community has studied placebos extensively and found them to have massive impacts. Although the percentage varies depending on a study’s purpose and participants, it’s often around 20–30% – but can be upwards of 72%. A related organisational phenomenon is the Hawthorne effect, which basically shows that when workers are given special attention and observed, their performance increases. So, you can easily find impressive results simply by creating an intervention that piques learners’ Placebo/Hawthorne responses. Just fuel their expectations, give them some attention, and make sure they know that you’re watching. This technique is particularly useful as it liberates you from actually creating effective learning.
- Design your experiment for success (because, why take the chance?). Once you have your pre- and posttests and Placebo/Hawthorne triggers ready to go, it’s time to create the study’s protocol. Some experimental designs work better than others. Specifically, you’ll get bigger effect sizes if you use (a) correlational or quasi-experimental designs (in other words, avoid participant randomisation and blind/double-blind assignments!), (b) proximal testing (evaluations that closely mirror the intervention and are completed close to it, like a written posttest completed shortly after training), and (c) a small population (stick to fewer than 500 people). Remember: we use veneers because they let us apply a valuable material (the beautiful veneer) very cost-effectively. Think of your time as the veneer: the more you save, the more of The Mandalorian you can catch up on later.
- Count everything (because MEASUREMENT FOR THE WIN). We’ve already talked about collecting pre- and posttest outcomes, but you’ll need more than that! Collect data on everything, so that you have plenty to play with after the experiment. Start by asking for detailed demographic data, because you might find your experiment works best for left-handed, bilingual women aged 25 to 50 – so you’ll need all of those variables in hand to find that needle in the haystack. Next, collect data on anything that’s countable, for example, the number of hours spent in training or the number of words read. You can also selectively count parts of self-response surveys, such as the number of items rated above ‘satisfactory’.
- Use statistical tricks (because it’s not cheating if it’s just maths). If you’ve followed our prior recommendations, then you already have some impressive results, but if you’re still struggling (or want to boost the results further), you can massage the data. There’s a large toolkit of data-dredging hacks that (bad) scientists have perfected over the years, such as p-hacking (manipulating the statistics to get a suitable p-value), fishing (playing with the statistics until some superficially nice-looking result appears, whatever it might be), or simply continuing to run the experiment until you get enough data to support some desired result. This is all good: after all, what’s the point of putting in the effort unless you can show success? Nobody ever learnt from getting anything wrong. And that’s a [statistically significant] fact!
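To see the ‘free progress’ from the pretest/posttest tip in action, here’s a minimal Python sketch (not from the original tips). The retest bump of 0.6 score units is an assumed, made-up value for illustration: nobody learns anything, yet a respectable ‘training effect’ appears.

```python
import random
import statistics

random.seed(7)
n = 1000
ability = [random.gauss(0, 1) for _ in range(n)]           # true skill: completely unchanged
pretest = [a + random.gauss(0, 1) for a in ability]
RETEST_BUMP = 0.6  # hypothetical practice/familiarity gain, in score units (no real learning)
posttest = [a + RETEST_BUMP + random.gauss(0, 1) for a in ability]

# Cohen's d: mean gain divided by the pooled standard deviation
gain = statistics.mean(posttest) - statistics.mean(pretest)
pooled_sd = ((statistics.variance(pretest) + statistics.variance(posttest)) / 2) ** 0.5
d = gain / pooled_sd
print(f"Apparent 'training effect' (Cohen's d) with zero learning: {d:.2f}")
```

A medium effect size, conjured entirely from test familiarity – no training required.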
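The ‘design for success’ tip works partly because tiny samples produce wildly noisy effect sizes, so an impressive number is always within reach somewhere. Here’s a hedged sketch (group sizes and study counts are arbitrary choices for illustration) comparing tiny and well-powered studies when the true effect is exactly zero:

```python
import random
import statistics

random.seed(3)

def observed_d(n):
    # Two groups drawn from the SAME distribution: the true effect is zero
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    pooled_sd = ((statistics.variance(a) + statistics.variance(b)) / 2) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

small = [abs(observed_d(10)) for _ in range(500)]    # 500 tiny pilot studies
large = [abs(observed_d(500)) for _ in range(500)]   # 500 well-powered studies
frac_small = sum(d > 0.5 for d in small) / len(small)
frac_large = sum(d > 0.5 for d in large) / len(large)
print(f"Tiny studies (n=10 per group):   {frac_small:.0%} show |d| > 0.5")
print(f"Large studies (n=500 per group): {frac_large:.0%} show |d| > 0.5")
```

With ten people per group, a ‘medium-to-large’ effect pops out of pure noise a sizeable fraction of the time; with 500 per group, almost never. Hence the advice to keep it small.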
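And for the ‘keep running the experiment’ hack above, here’s a minimal sketch of optional stopping: peek at the data after every batch and stop the moment ‘p < 0.05’ appears. The batch sizes and counts are arbitrary assumptions for illustration; the inflation is not.

```python
import random

random.seed(1)

def peeking_experiment(batches=20, batch_size=10):
    """Collect data in batches, run a z-test after each batch,
    and stop as soon as 'p < 0.05' (|z| > 1.96) shows up."""
    data = []
    for _ in range(batches):
        data += [random.gauss(0, 1) for _ in range(batch_size)]  # no true effect at all
        n = len(data)
        z = (sum(data) / n) * n ** 0.5   # z-statistic for the mean of N(0, 1) data
        if abs(z) > 1.96:
            return True                  # declare "significance" and stop collecting
    return False

n_sims = 1000
false_positive_rate = sum(peeking_experiment() for _ in range(n_sims)) / n_sims
print(f"False-positive rate with optional stopping: {false_positive_rate:.1%} (nominal: 5%)")
```

Simply by checking repeatedly and stopping on a win, the nominal 5% error rate balloons several-fold – which is exactly why (honest) researchers pre-register their stopping rules.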
Part 3: Communicating About Your Amazing, Evidence-Based Results
We’ve made great progress so far. Using the foundations of Western scientific methodology, we’ve been able to add real value with minimal effort. But there’s one more step. After you’ve assembled a supportive literature review and conducted your own empirical testing, it’s time to share your results. There are plenty of good guides to writing (bad) research articles, with excellent advice such as, “never explain the objectives of the paper in a single sentence…in particular never at the beginning” and make sure to “use different terms for the same thing”. In addition to that great guidance, we’ll add two more suggestions:
- Build on personal experience (because feeling IS believing). People love personal stories, and we’ve all experienced education and training before – so, we’re all mini-experts on the subject of learning. Work with that. Use your own experiences or, even better, reference common human experiences as naturalistic evidence. After all, we’re all humans, and we all think and learn in the same ways. So, these common experiences will help people relate to your new L&D idea. Draw readers or customers in with anecdotes about personal experiences, and then generalise from those experiences to help explain and support your concept.
- Use snazzy terminology (because with a growth mindset, we can be neuro-informed): Like a well-tailored suit on a businessperson, certain words add polish that can make or break your L&D idea. At a minimum, make sure to use both ‘Machine Learning’ (ML) and ‘Artificial Intelligence’ (AI). (Don’t worry if you don’t actually use AI, because a lot of so-called AI startups don’t either!) Next, pick a few L&D terms that describe your idea or offering. Finally, include a few classic innovation words, like ‘emerging’ or ‘cutting-edge’, so that people know this is a new concept. Don’t worry if this seems like hard work: Sae has put together a table to help. Start with the following prompt, and then select a word from each column to fill it in:
Our concept uses AI/ML and [column 1], [column 2] [column 3] to optimise [column 4].
| Column 1 | Column 2 | Column 3 | Column 4 |
| --- | --- | --- | --- |
| emerging | cloud-enabled | serious games | bench strength |
| agile | mobile-first | master classes | cross skilling |
| bleeding edge | big data | expert seminars | design thinking |
| synchronous | extended reality | gamification | double-loop learning |
| net-centric | neuro-informed | blended systems | core competencies |
| context-aware | evidence-based | virtual classrooms | team workflow |
| right-sized | hybrid learning | experiential learning | your business ecosystem |
| higher-order | self-paced | instructional methods | learner empowerment |
- Bonus tip: make beautiful data visualisations (because nobody fact-checks an infographic). Any data represented in an infographic is automatically more valid than a table. Ideally, you should embellish your presentations with animations. To avoid confusion, eliminate distractions such as standard-deviation notations or error bars, which just get in the way of a good story. Instead, opt for basic graphs wherever possible, like bar charts with just one or two items. You can, for example, make a dashboard of vanity metrics such as the number of hours spent learning, smile-sheet scores, and the change in pre- to posttest results. Basically, anything that is countable can be included (so long as the numbers look right, of course). And use orange, because it’s a warm colour, and everyone loves a winner.

To summarise: if you follow these 10 steps (plus the bonus tip), you should be well positioned to become a published learning scientist with a string of innovative, evidence-based AI/ML concepts (among other more questionable descriptors) tied to your name.