Best poster awards are one of the most valued traditions at medical and scientific congresses. They give early-career researchers a moment in the spotlight, motivate higher-quality submissions, and add a competitive edge that keeps attendees engaged beyond the main programme. But behind the scenes, organising a poster competition is rarely as elegant as it looks.
If you have ever coordinated a Best Poster Award at a medical congress, you know the drill. Printed scoring sheets. Clipboards passed around the room. Judges who leave early. Illegible handwriting. Someone has to collect every sheet, add up the numbers manually, and then hope they did not make a mistake. By the time the winner is announced, the organising team has spent hours doing work that has nothing to do with science.
Paper Ballots vs. Digital Judging
Estimated time and risk across a typical 20-poster judging session with 5 judges:
- Time per stage (minutes): paper totals roughly 210 minutes; digital totals roughly 16 minutes.
- Risk exposure, rated from 0 (none) to 10 (critical), reflects how likely each failure mode is to affect your results.
Estimates are based on a 20-poster session with 5 judges scoring 3 criteria each. Time includes coordination, travel between posters, and administrative overhead. Risk scores are qualitative assessments based on commonly reported issues at academic poster sessions.
Why Poster Judging Is Harder Than It Looks
The logistical challenges of running a poster award are often underestimated. Most congresses deal with a combination of the following problems.
- Judges cannot be in multiple places at once, which creates coverage gaps when posters are clustered by session or topic.
- Paper forms get lost, mixed up, or returned incomplete.
- Scoring criteria are inconsistently applied across judges when there is no structured interface guiding their evaluation.
- Tallying results is done manually under time pressure, usually right before the awards ceremony.
- There is no audit trail. If someone disputes a score, there is nothing to check.
These are not edge cases. They happen at small departmental meetings and at international congresses with thousands of attendees alike. The size of the event changes the scale of the problem, not the nature of it.
What a Good Poster Judging System Actually Needs
Before looking at solutions, it helps to be clear about the requirements. The system you use needs to do several things well.
First, it needs to be accessible. Judges at medical congresses are typically busy clinicians or researchers attending the event in a professional capacity. They are not going to download a new app or create a new account to score a few posters. If the barrier to entry is too high, engagement drops and scores are incomplete.
Second, it needs to support structured multi-criteria scoring. A single overall score tells you very little. A well-designed judging system allows you to assess posters across multiple dimensions, such as scientific quality, clarity of presentation, originality, and clinical relevance. This produces a more defensible and meaningful result.
Third, it needs to give organisers visibility in real time. You should be able to see who has scored what, how many posters have been evaluated, and where the gaps are, without chasing people down the corridor.

The Case for Digital Poster Judging
Digital judging tools designed specifically for scientific events address all of the above. The key is choosing something that was built for this context, rather than a generic survey tool or spreadsheet workaround.
The right tool lets judges access their scoring form via a QR code, submit scores directly from their phone or tablet without any login or installation, and move through their assigned posters at their own pace. For organisers, the dashboard updates in real time: scores come in, rankings shift, and the final results are ready the moment the last judge submits.
The difference in workload is significant. What used to take two hours of manual data entry can be reduced to a few seconds of exporting results. And the data is clean, timestamped, and reproducible.
At the Life Science PhD Meeting 2026 in Innsbruck, organisers used InstaJudge for the first time to manage both the poster and short talk judging. They reported that monitoring evaluations in real time and identifying winners for the Best Poster and Best Short Talk awards became "remarkably efficient".
Setting Up a Best Poster Award: A Practical Guide
Whether you are running a 50-poster session at a departmental research day or a 500-poster competition at a national medical conference, the core process is the same. Here is how to approach it.
1. Define your scoring criteria before anything else
This is the most important step and it is often skipped. Generic criteria like "overall quality" produce inconsistent results across judges. Instead, define three to five specific dimensions that reflect what your organisation values. Scientific rigour, methodological clarity, novelty of findings, visual design, and the presenter's ability to explain their work are all strong options. Each criterion should be scored on a defined scale, and judges should know what each score level means.
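As a concrete sketch of what "defined criteria on a defined scale" can look like in practice, here is a minimal weighted rubric. The criterion names, weights, and 1-to-5 scale below are illustrative assumptions, not a prescribed standard; adapt them to what your organisation values.

```python
# Illustrative rubric: criterion names, weights, and scale are assumptions.
RUBRIC = {
    "scientific_rigour": 0.30,
    "methodological_clarity": 0.25,
    "novelty": 0.20,
    "visual_design": 0.10,
    "presentation": 0.15,
}

SCALE = (1, 5)  # each criterion scored on a defined 1-5 scale

def weighted_score(scores: dict) -> float:
    """Combine one judge's per-criterion scores into a weighted total."""
    lo, hi = SCALE
    for name, value in scores.items():
        if name not in RUBRIC:
            raise ValueError(f"unknown criterion: {name}")
        if not lo <= value <= hi:
            raise ValueError(f"{name} score {value} is outside the {lo}-{hi} scale")
    return round(sum(RUBRIC[c] * scores[c] for c in RUBRIC), 2)

example = {
    "scientific_rigour": 4,
    "methodological_clarity": 5,
    "novelty": 3,
    "visual_design": 4,
    "presentation": 5,
}
print(weighted_score(example))  # 4.2
```

Writing the rubric down this explicitly, even if only in a briefing document, is what keeps five different judges applying the same standard.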
2. Select and brief your judges in advance
Judges should receive their assignments before the day of the poster session, not when they arrive. Let them know which posters they are evaluating, what criteria they will use, and how to access the scoring system. A five-minute briefing at the start of the session is worth doing even when everything is clear on paper. It reduces errors and sets expectations.
3. Use QR codes to distribute access
If you are using a digital judging platform, QR codes are the most reliable distribution method. Print each judge's unique code on their name badge or on a card in their conference bag. When they are ready to score, they scan and go. No email hunting, no login screens.
4. Monitor completion during the session
With a real-time dashboard, you can see exactly where progress stands at any point during the judging period. If a judge has not started, you can approach them personally. If a poster has not been evaluated by anyone, you can flag it. This kind of visibility is not possible with paper-based systems and it materially improves the completeness of your data.
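The underlying check a dashboard performs is simple: compare each judge's assignments against the scores received so far. A minimal sketch, with made-up judge and poster IDs:

```python
# Sketch of mid-session completion tracking: given assignments and the
# scores submitted so far, list what is still outstanding.
# Judge and poster IDs are illustrative.
from collections import defaultdict

assignments = {
    "judge-1": {"P-03", "P-07", "P-12"},
    "judge-2": {"P-03", "P-07"},
}
submitted = {("judge-1", "P-07"), ("judge-2", "P-03"), ("judge-2", "P-07")}

def outstanding(assignments, submitted):
    """Return {judge: posters assigned but not yet scored}."""
    done = defaultdict(set)
    for judge, poster in submitted:
        done[judge].add(poster)
    return {j: posters - done[j] for j, posters in assignments.items() if posters - done[j]}

print(outstanding(assignments, submitted))  # judge-1 still owes P-03 and P-12
```

Running this check continuously is exactly the visibility a paper-based system cannot give you.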
5. Export results and announce with confidence
When judging closes, your results are ready immediately. No manual tallying, no re-checking formulas in a spreadsheet. You can sort by average score, filter by category, and identify your top-ranked posters in seconds. The winning announcement becomes a moment to celebrate, not a stressful scramble.
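The final tally that a judging platform automates is, at its core, an average-and-sort over the exported scores. A hedged sketch with invented poster IDs and scores:

```python
# Sketch of the final tally: average each poster's totals across judges,
# then rank highest first. Poster IDs, judge IDs, and scores are made up.
from collections import defaultdict
from statistics import mean

# (poster_id, judge_id, total_score) rows, as exported from the judging tool
submissions = [
    ("P-07", "judge-1", 4.2), ("P-07", "judge-2", 4.6), ("P-07", "judge-3", 4.4),
    ("P-12", "judge-1", 4.8), ("P-12", "judge-2", 4.5),
    ("P-03", "judge-2", 3.9), ("P-03", "judge-3", 4.1),
]

by_poster = defaultdict(list)
for poster, _judge, score in submissions:
    by_poster[poster].append(score)

# Rank by mean score; keep the judge count so uneven coverage is visible
ranking = sorted(
    ((poster, round(mean(scores), 2), len(scores)) for poster, scores in by_poster.items()),
    key=lambda row: row[1],
    reverse=True,
)
for rank, (poster, avg, n_judges) in enumerate(ranking, start=1):
    print(f"{rank}. {poster}: mean {avg} from {n_judges} judges")
```

Reporting the number of judges alongside the mean matters: a poster scored by one judge is not directly comparable to one scored by five, and surfacing that gap is part of what makes the result defensible.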

Common Mistakes to Avoid
A few pitfalls come up repeatedly in poster award organisation, regardless of the size of the event.
- Assigning too many posters per judge. A realistic number depends on the depth of scoring, but anything above 8 to 10 posters per judge risks fatigue and declining quality of evaluation.
- Leaving the judging window too short. If the session runs from 14:00 to 16:00, judges need enough time to visit, read, ask questions, and score. Overlapping that window with a keynote or lunch creates attendance problems.
- Not accounting for no-shows. Always have one or two backup judges identified. People cancel at the last minute and a gap in coverage can affect the validity of the results.
- Announcing criteria only on the day. Judges who understand what they are scoring in advance produce more consistent and thoughtful evaluations.
- Using a generic survey tool not designed for judging. Tools like Google Forms or SurveyMonkey can work in a pinch, but they do not support real-time monitoring, judge-specific assignments, or automatic result aggregation.
What Makes a Poster Competition Worth Attending
Beyond the logistics, it is worth remembering why poster competitions matter. For PhD students and early-career researchers, a Best Poster Award is often one of the first formal recognitions of their careers. The quality of the judging process reflects the seriousness with which your organisation treats that recognition.
When the process is transparent, structured, and professionally run, it adds real credibility to the award. When it is rushed or disorganised, even the deserving winner may feel the recognition was arbitrary.
The tools available today make it straightforward to run a poster competition that is both efficient for organisers and fair for participants. The question is no longer whether you can do it well. It is whether you are willing to leave the clipboards at home.
