Development of the EEF’s Teaching and Learning Toolkit

The Teaching and Learning Toolkit and the data behind the summaries of evidence.

The Education Endowment Foundation (EEF), a long-standing client of ours, leads the education achievement strand of the What Works Network. In this blog post, we focus on the development of the Teaching and Learning Toolkit and the data behind the summaries of evidence.

We recently re-launched the EEF web application and version 3.0 of the Teaching and Learning Toolkit, one of the most popular education resources in the UK and the most in-demand resource on the content-rich EEF site.

Such an application requires a lot of thought and planning, with a development timeline that started over a year before launch and will continue in the years to come.


Version 1.0 of the Toolkit was launched in late 2012. The first version we developed was pioneering and paved the way for online evidence summaries – most evidence centres now have them, and each looks to the EEF as a benchmark of success.

The principle of least astonishment proposes that a component of a system should behave in the way most users expect – it should not astonish or surprise them.

The EEF web application (the system) already had a large user base and global awareness. The brief for version 3.0 of the Toolkit (the component) wasn’t to completely reinvent it or surprise users with something brand new; it was to address the shortcomings of versions 1 and 2 and continue to build on the successes of those major releases.

The EEF undertook a complete rewrite of each of the 30 strands that make up the Toolkit, offering much greater insight into the different approaches schools can use to boost pupils’ attainment outcomes.

Armed with over a decade of Toolkit development experience, an in-depth understanding of the traffic flow, and draft outlines of the content structure, our UI team created a new, scalable user interface that is still recognised as the global leader in evidence Toolkits.

The EEF now have a new content platform that allows their users to delve deeper into what makes up an evidence strand and become much better informed about how an intervention could help in their specific environment.



It wasn’t simply a UI change. Previously, the data fed into the evidence summaries was internally collated. Whilst we didn’t get too involved in this, we know it was chaotic and challenging to manage at scale.

The EEF spent over 3 years entering research studies into a database that now contains over 2,500 individual studies.

At Percipio, we love nothing more than clean, manageable data – it allows us to do wonderful things to make data easily understandable to audiences.

Even before the government directive for the 2021/22 school year made it compulsory for pupil premium funding decisions to be informed by evidence, the EEF made it clear that it was important users could drill down into the details behind the summary data.

For example, attainment effects for individualised instruction (providing different tasks for each learner and support at the individual level) tend to be different in literacy, numeracy or the sciences.

Previously, it would have been a manual exercise for users to research the listed studies that made up their chosen strand’s effect size. Using the new database of studies from the EEF, we now had usable data to build a dynamic system.

EEF Toolkit Reference panel

The end result is an interactive model that lets users see how the overarching attainment impact can change when an approach is applied in their specific environment. For example, the outcome boost for a strand in a primary mathematics setting may be vastly different from the outcome for the same strand in a secondary science setting – users can now discover this for themselves.
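How such a filter might work can be sketched in a few lines of Python. This is an illustrative simplification, not the EEF’s actual methodology: the field names, the sample-size weighting and the example figures are all invented for the demonstration.

```python
# Hypothetical sketch: filtering a study database by setting and
# recomputing a weighted mean effect size. Field names, weighting
# scheme and figures are illustrative, not the EEF's actual model.

def mean_effect_size(studies, phase=None, subject=None):
    """Average effect size over studies matching the chosen setting."""
    matched = [
        s for s in studies
        if (phase is None or s["phase"] == phase)
        and (subject is None or s["subject"] == subject)
    ]
    if not matched:
        return None
    # Weight each study by sample size so larger trials count more.
    total_n = sum(s["n"] for s in matched)
    return sum(s["effect"] * s["n"] for s in matched) / total_n

studies = [
    {"phase": "primary", "subject": "maths", "effect": 0.30, "n": 120},
    {"phase": "primary", "subject": "maths", "effect": 0.20, "n": 80},
    {"phase": "secondary", "subject": "science", "effect": 0.10, "n": 200},
]

print(mean_effect_size(studies, phase="primary", subject="maths"))
print(mean_effect_size(studies, phase="secondary", subject="science"))
```

The same strand yields a different headline figure depending on the setting chosen – which is exactly the comparison the interactive model surfaces for users.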


Unfortunately, data doesn’t simply organise itself into dynamic charts and functions through some feat of technical wizardry. We host the output of the reference database on our Metaseed platform, turning the static data into a queryable API. This allows the EEF web application to offload all of the Toolkit’s technical computations and the technical appendix to Metaseed, so its core functionality can remain that of a content-serving platform.
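As a rough sketch of that division of labour, the content platform only needs to construct a query and hand it off to the data API. The host, endpoint and parameter names below are placeholders invented for the example, not Metaseed’s real interface.

```python
# Hypothetical sketch: a content platform delegating data queries to a
# separate API. The base URL and parameter names are placeholders.
from urllib.parse import urlencode

API_BASE = "https://api.example.org/toolkit"  # placeholder host

def strand_query_url(strand, **filters):
    """Build the query URL the web application would call for a strand."""
    # Sort the filters so equivalent queries produce identical URLs,
    # which makes caching the API responses straightforward.
    return f"{API_BASE}/strands/{strand}?{urlencode(sorted(filters.items()))}"

url = strand_query_url("feedback", phase="primary", subject="maths")
print(url)
```

All the heavy computation happens behind that URL; the content platform just renders whatever the API returns.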


Why is this important? Retaining a separation between the large amounts of data and the associated complexities facilitates scalability.

The Toolkit already serves up over 1 million pages to users every year – and that’s just the UK version. The EEF have international partners all over the globe that also tap into the content.

Keeping the EEF web application as a content platform allows independent updating, performance changes, and monitoring – something that is essential for high-availability, zero-downtime situations.


At least 1 in 5 people in the UK have a long-term illness, impairment or disability, and many more have a temporary disability.

Accessibility means making sure content and design are clear and simple enough that most people can use them without adaptation, while supporting those who do need to adapt them.

Here at Percipio, we make no assumptions on the ability of users accessing the platforms we develop, and the EEF Teaching and Learning Toolkit is no exception.

Someone with impaired vision might use a screen reader, braille display or screen magnifier. Someone with motor difficulties might use a particular mouse, speech recognition software or an on-screen keyboard emulator. We’re proud to say that the Toolkit conforms to the WCAG 2.1 AA digital accessibility standard, ensuring all users can access and consume the evidence summaries.


There is always someone accessing the EEF Teaching and Learning Toolkit – quite literally. Not a minute now goes by where Toolkit content isn’t being served up somewhere in the world.

The EEF user base is growing daily, and updates are sent to the full application on a weekly basis – one month after launching version 3.0.0, we are already on 3.1.3 at the time of writing.

All of this adds up to an environment that is constantly being developed and cannot afford any downtime for updates. We have a DevOps cycle in place, which we employ on all of our projects. Part of this continuous integration ensures we can deploy code to servers without causing temporary maintenance outages.
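The general technique here is a rolling deployment: servers are drained, updated and re-added one at a time, so the pool as a whole never goes dark. A minimal simulation, with an invented server pool standing in for real infrastructure:

```python
# Simplified sketch of a rolling, zero-downtime deployment. Each server
# is taken out of rotation, updated, then put back before the next one
# is touched, so traffic is always being served by the rest of the pool.
# The pool and the update step are simulated for illustration.

def rolling_deploy(pool, new_version):
    """Update servers one by one, keeping the rest in service."""
    for server in pool:
        server["in_service"] = False      # drain: stop routing traffic here
        server["version"] = new_version   # deploy the new build
        server["in_service"] = True       # health check passed, re-add
        # At every point, len(pool) - 1 servers remain in service.
    return pool

pool = [
    {"name": f"web-{i}", "version": "3.0.0", "in_service": True}
    for i in range(3)
]
rolling_deploy(pool, "3.1.3")
print(all(s["version"] == "3.1.3" and s["in_service"] for s in pool))
```

In practice a real pipeline adds health checks and automatic rollback between those steps, but the ordering is what guarantees no maintenance outage.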

This is wonderful and somewhat of a development nirvana, but it does nothing for traffic spikes generated by newsletter mailouts, press releases or new government directives being published.

Version 3.0 now sees users being distributed across multiple servers based on resource load and server capacity. Essentially, we can expand or contract server capacity depending on how many users are accessing information.
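The underlying idea can be illustrated with a least-connections style selection: route each request to the server with the most spare capacity, and flag the pool for expansion when overall load crosses a threshold. The server names, capacities and threshold below are invented for the example.

```python
# Illustrative sketch of least-connections load balancing and a simple
# scale-out signal. Names, capacities and the threshold are invented.

def pick_server(servers):
    """Route the next request to the server with the most spare capacity."""
    return max(servers, key=lambda s: s["capacity"] - s["active"])

def scale_out_needed(servers, threshold=0.8):
    """Signal that capacity should expand when overall load is high."""
    load = sum(s["active"] for s in servers) / sum(s["capacity"] for s in servers)
    return load >= threshold

servers = [
    {"name": "web-1", "capacity": 100, "active": 90},
    {"name": "web-2", "capacity": 100, "active": 40},
]

print(pick_server(servers)["name"])   # web-2 has the most headroom
print(scale_out_needed(servers))      # 130/200 load, below the threshold
```

Production load balancers make the same decision continuously, per request, using live health and load metrics rather than static figures.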


All of this adds up to a fully scalable, robust platform that caters for the needs of version 3.0.

You can see it in action here: Teaching and Learning Toolkit


Next steps

For over a decade, we’ve been working on the EEF Toolkit (amongst others), and the innovation doesn’t stop here. We are fully aware that no platform is perfect – to that end, we constantly pore over user metrics, employ focus groups and use this feedback to build better systems.

Using evidence to inform platform decisions and constantly improving both our build systems and our client products is paramount to the success of the web applications we develop.

There’s also talk of machine learning and artificial intelligence…