What About Evidence-Based Digital Maths Programs?

Schools across Australia are incorporating apps and digital programs such as Matific and Mathletics into their numeracy and mathematics curriculums, for both classroom activities and homework. Use of such educational technology spiked during Covid-19 lockdowns, when there were few other teaching options and the immediacy of school closures didn’t allow for careful evaluation before these tools were rolled out. However, five years later, routine use of these platforms and apps continues. So here we pause and critically consider digital maths programs as an educational tool for our youngest student minds.
Executive Summary:
A review of existing research shows that digital numeracy and mathematics programs do not support learning any better than, and in some cases not as well as, traditional teacher-led lessons and activities.
With such programs, correct responses become the focus, the goal, and the outcome measured, rather than reasoning and deeper understanding of the underlying concepts.
Knowing this, why are online maths and numeracy apps replacing or supplementing the traditional teacher-led instruction that we know works best?
The Evidence-base:
Reliable research needs to be independent, peer-reviewed, and replicated. That is: a) designed and undertaken by researchers with no connection to the product, b) published in peer-reviewed journals to ensure critical review of the design, method, and interpretation of results, and c) replicated to increase confidence in the findings. We’ve reviewed the research evidence base for two of the most commonly used platforms below.
Matific:
Matific lists the following research on its website:
“34% boost in test scores in just one term” The associated study is not a peer-reviewed published paper. It did not include a control group, i.e. they did not compare the outcomes to students who continued with traditional teaching instead of Matific. It is therefore entirely possible that this 34% boost was the same as, or lower than, the gains made by students who did not switch to Matific.
“Three Months of Extra Learning in One School Year” This study suggests some benefit to students. However, the research team at Johns Hopkins’ Evidence for ESSA reviewed such evidence and concluded overall for Matific that “no studies met inclusion requirements.”
“Large Effect Sizes in Maths Achievement” This study is again not a peer-reviewed published paper. It used a large sample and found no significant difference in mathematics achievement between users of Matific and the control group. The large effect size mentioned appeared only when they looked at the students with high usage of Matific. This would be expected: students who spend more time doing maths will do better. The additional time is the differentiating factor here, not the mode of instruction.
“Early Learners Show Significant Gains in only Four Weeks” This study had no control group, so again there is no way to compare it to students who continued with traditional learning. It was also a very small convenience sample of 20 students, which is not representative and introduces possible sampling bias.
Mathletics:
The evidence of learning outcomes that Mathletics mentions on its website comes from studies conducted by LearnPlatform, “a third-party Ed-Tech research company.” Mathletics most commonly cites findings from a 2024 study which was “prepared for 3P Learning” – the owner of Mathletics. This is not independent research.
This study’s control group was labelled “non-users” without any description of what this group did instead of Mathletics. It is therefore unclear what the Mathletics group is being compared to.
There are no studies published in peer-reviewed journals evidencing superior outcomes associated with using Mathletics.
Maths-focused apps, overall:
A 2017 meta-analysis pooled data from 36 existing studies and found that apps and online programs were helpful mainly for ‘constrained’ maths skills – those that are easier to teach and mastered with ease by the majority of children, such as counting, sorting shapes, and simple addition. Beyond this, the usefulness of such apps was low.

What Neuroscience Tells Us About How Brains Learn When Using Maths Apps:
Correctness is prioritised over understanding:
Maths apps most commonly provide instant feedback that simply indicates whether an answer is correct, with minimal or no explanation or guidance. This encourages students to concentrate on answers rather than reasoning, and thus not to learn the concepts behind the maths (e.g. why a method works or how to apply it in new contexts).
Immediate feedback activates fast “habit” systems in the brain, not deep learning systems:
Neuroscience research demonstrates that different brain systems are engaged depending on feedback timing (e.g. Foerde et al., 2006, 2011).
Immediate feedback primarily activates the striatal dopamine system, which supports:
Rapid trial-and-error learning.
Habit formation.
Short-term performance gains.
Delayed or reflective feedback engages the hippocampus, which supports:
Memory consolidation.
Integration of ideas.
Long-term, flexible learning.
Instant correction may help students get answers right quickly, but it is less likely to support durable understanding, transfer, or problem-solving. When platform websites claim “improved test scores”, yes, students may be able to pick the correct answer more often, but this is not the same as deeper conceptual learning. This also explains why these digital tools may improve short-term performance data without producing meaningful improvements in mathematical understanding.
Productive struggle is important:
Learning maths often requires students to hold information in working memory, test ideas, reflect on errors, and revise strategies. However, instant feedback can short-circuit this process. Students don’t get the chance to shift into active problem-solving when feedback is given instantly; they end up simply updating their responses based on the feedback. This often leads to guessing and trial-and-error strategies, which do not deepen conceptual learning or build resilience.
Summary:
Digital numeracy and mathematics platforms and apps are widely used in Australian schools. However, current research does not show that these programs improve mathematical learning more than traditional teacher-led instruction.
Much of the evidence cited by these platforms is not independent, peer-reviewed, or based on proper experimental design. Studies show no significant difference in achievement between students who use the apps and those who learn through traditional methods. Research also suggests that maths apps mainly support basic, easily mastered skills rather than deeper conceptual understanding.
Neuroscience indicates that the instant feedback common in these digital programs encourages trial-and-error learning and habit formation rather than reflective thinking and deeper long-term understanding. While maths apps can motivate students and provide convenient practice, they should not come at the expense of explicit teaching and conceptual mathematical reasoning.
Arguments for a “balanced approach” amount to giving students a mix of optimal and sub-optimal learning experiences. When we know that deep, conceptual, transferable mathematical knowledge and understanding is best gained from direct engagement with teachers, why are we looking to replace, or even supplement, this at all?
Key comparisons:
| Maths apps | Traditional teaching |
| --- | --- |
| Instant right/wrong feedback with little explanation | Delayed, reflective feedback and guided reasoning |
| Engages fast “habit” (striatal) systems; short-term performance gains | Engages the hippocampus; memory consolidation and long-term, flexible learning |
| Most helpful for ‘constrained’ skills such as counting and simple addition | Supports deeper conceptual understanding and transfer |
Keen to learn more?
Grab a copy of Dr Jared Cooney Horvath's new book 'The Digital Delusion' or watch some of his informative short videos.