EEA websites are audited daily and receive a score between 0 and 100 for each of the following metrics:
- Performance - evaluated using Lighthouse in Chrome Developer Tools (see the Lighthouse sketch after this list)
- Progressive Web App - evaluated using Lighthouse in Chrome Developer Tools
- Accessibility - evaluated using Lighthouse in Chrome Developer Tools (Audits)
- Best Practices - evaluated using Lighthouse in Chrome Developer Tools (Audits)
- SEO Score - evaluated using Lighthouse in Chrome Developer Tools (Audits)
- Links Integrity - the number of broken links in a website is evaluated using linkchecker, run locally in a Docker container (see the linkchecker sketch after this list)
- Encryption (TLS) - certificate validity and configuration are evaluated using the SSL test tool at https://www.ssllabs.com/ssltest/ (see the SSL Labs API sketch after this list)
- SecurityHeaders.com - checks for missing HTTP security headers in website responses using the tool at https://securityheaders.com/. This metric is somewhat outdated and largely superseded by the Mozilla security headers metric.
- Securityheaders (Mozilla) - also checks for missing HTTP security headers in website responses, but uses the tool at https://observatory.mozilla.org/ (see the header-check sketch after this list)
- webbkoll.dataskydd.net - evaluated using Webbkoll, an online tool that simulates a normal browser visit and checks for potential privacy problems.
- Browsertime - provides an alternative way of evaluating website performance.
- Checkmk - provides a score based on downtime data from the Checkmk server.
- Uptime (30 days) - this metric gives 100 if the site was never found down/unresponsive during the checks performed over the last 30 days. Uptime data is gathered using https://uptimerobot.com/ (see the UptimeRobot sketch after this list)
- Server Errors (7 days) per visit - the aggregated number of server-side errors/exceptions received in Sentry from the website, divided by the total number of website visits as logged by Matomo (see the errors-per-visit sketch after this list)
- JS Errors (7 days) per visit - the aggregated number of JS errors/exceptions received in Sentry, divided by the number of website visits as logged by Matomo
- Test coverage - code quality metric computed from the results of tests run in Jenkins using the SonarQube scanner. Coverage is computed across all SonarQube projects that have the URL as a tag and whose names end with '-master'. The score is the sum of all covered lines divided by the sum of all lines, expressed as a percentage (see the SonarQube sketch after this list).
- Bugs - code quality metric computed from the results of tests run in Jenkins using the SonarQube scanner. Bugs are counted across all SonarQube projects that have the URL as a tag and whose names end with '-master'. The score is 0 if at least one bug has grade E (Blocker), 25 if at least one has grade D (Critical), 50 for C (Major), and 75 for B (Minor).
- Vulnerabilities - code quality metric computed from the results of tests run in Jenkins using the SonarQube scanner. The vulnerability score is computed across all SonarQube projects that have the URL as a tag and whose names end with '-master'. The score is 0 if at least one vulnerability has grade E (Blocker), 25 if at least one has grade D (Critical), 50 for C (Major), and 75 for B (Minor).
- Code smells - code quality metric computed from the results of tests run in Jenkins using the SonarQube scanner. The code smells score is computed across all SonarQube projects that have the URL as a tag and whose names end with '-master'. The score is 0 if at least one code smell has grade E (Blocker), 25 if at least one has grade D (Critical), 50 for C (Major), and 75 for B (Minor).
- Duplication score - code quality metric computed from the results of tests run in Jenkins using the SonarQube scanner. Duplication is computed across all SonarQube projects that have the URL as a tag and whose names end with '-master'. The score is 100 minus the percentage of duplicated lines (the sum of all duplicated lines divided by the sum of all lines).
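The Lighthouse-based scores above can be collected without opening Chrome Developer Tools by running the Lighthouse CLI headlessly. A minimal sketch, assuming the `lighthouse` npm package and a local Chrome are installed (the actual audit pipeline may invoke it differently):

```python
import json
import subprocess

URL = "https://www.eea.europa.eu/"  # any audited site

# Run Lighthouse headlessly and emit the full report as JSON on stdout.
result = subprocess.run(
    [
        "lighthouse", URL,
        "--output=json", "--output-path=stdout",
        "--quiet", "--chrome-flags=--headless",
    ],
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)

# Lighthouse reports each category score as a 0-1 float; scale to 0-100.
# Note: recent Lighthouse versions no longer ship the "pwa" category.
for category_id in ("performance", "pwa", "accessibility", "best-practices", "seo"):
    category = report["categories"].get(category_id)
    if category and category["score"] is not None:
        print(f"{category['title']}: {round(category['score'] * 100)}")
```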
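For Links Integrity, linkchecker can emit one CSV row per checked link, from which a broken-link ratio falls out directly. A sketch, assuming linkchecker is on the PATH (the audit wraps it in Docker instead); the percentage-based score at the end is a hypothetical illustration:

```python
import csv
import subprocess

URL = "https://www.eea.europa.eu/"

# One CSV row per checked link; --check-extern also validates off-site
# links. linkchecker exits non-zero when it finds errors, so no check=True.
proc = subprocess.run(
    ["linkchecker", "--check-extern", "-o", "csv", URL],
    capture_output=True, text=True,
)

# CSV output uses ';' as separator and '#'-prefixed comment lines;
# the exact column layout can vary between linkchecker versions.
rows = [line for line in proc.stdout.splitlines()
        if line and not line.startswith("#")]
checked = broken = 0
for row in csv.DictReader(rows, delimiter=";"):
    checked += 1
    if row.get("valid", "").lower() != "true":
        broken += 1

# Hypothetical score: share of checked links that are not broken.
score = 100 if checked == 0 else round(100 * (checked - broken) / checked)
print(f"{broken}/{checked} broken links -> score {score}")
```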
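The SSL Labs test behind the Encryption (TLS) metric is also available as a public JSON API, which is easier to poll from a daily job than the web page. A sketch; the grade-to-score mapping at the end is a hypothetical illustration, not the audit's actual formula:

```python
import time
import requests

API = "https://api.ssllabs.com/api/v3/analyze"
HOST = "www.eea.europa.eu"

# Start (or reuse) an assessment and poll until it finishes; a fresh
# assessment typically takes a few minutes.
while True:
    data = requests.get(API, params={"host": HOST, "fromCache": "on"},
                        timeout=30).json()
    if data["status"] in ("READY", "ERROR"):
        break
    time.sleep(15)

if data["status"] == "READY":
    grade = data["endpoints"][0]["grade"]  # e.g. "A+", "B", "F"
    # Hypothetical mapping from SSL Labs grades to a 0-100 score.
    score = {"A+": 100, "A": 95, "A-": 90, "B": 75,
             "C": 50, "D": 25, "E": 10, "F": 0}.get(grade, 0)
    print(f"{HOST}: grade {grade} -> score {score}")
```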
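Both security-header metrics ultimately inspect the HTTP response headers the site sends. The header-check sketch below is only a rough local approximation; securityheaders.com and the Mozilla Observatory each use their own header list and weighting:

```python
import requests

URL = "https://www.eea.europa.eu/"

# Headers commonly graded by securityheaders.com and the Observatory;
# this list and the even weighting are illustrative assumptions.
EXPECTED = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "X-Frame-Options",
    "Referrer-Policy",
    "Permissions-Policy",
]

# requests exposes response headers as a case-insensitive mapping.
headers = requests.get(URL, timeout=30).headers
missing = [h for h in EXPECTED if h not in headers]

score = round(100 * (len(EXPECTED) - len(missing)) / len(EXPECTED))
print(f"missing: {missing or 'none'} -> score {score}")
```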
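The 30-day uptime figure can be fetched from the UptimeRobot v2 API. A sketch, assuming a valid API key; UptimeRobot returns the uptime ratio as a percentage, which maps naturally onto the 0-100 score:

```python
import requests

# getMonitors takes form-encoded POST parameters; custom_uptime_ratios
# requests the uptime percentage over the given number of days.
resp = requests.post(
    "https://api.uptimerobot.com/v2/getMonitors",
    data={
        "api_key": "YOUR_UPTIMEROBOT_API_KEY",  # placeholder
        "format": "json",
        "custom_uptime_ratios": "30",
    },
    timeout=30,
)
for monitor in resp.json()["monitors"]:
    ratio = float(monitor["custom_uptime_ratio"])  # e.g. "99.982"
    print(f"{monitor['friendly_name']}: {ratio:.3f}% uptime over 30 days")
```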
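Both error-rate metrics are the same ratio with a different numerator: error events over 7 days divided by visits over the same window. The errors-per-visit sketch below pulls the visit count from the Matomo Reporting API; the Sentry event count is left as a plain input because the right aggregation endpoint depends on the Sentry version in use, and the instance URL and token are placeholders:

```python
import requests

MATOMO = "https://matomo.example.org/index.php"  # hypothetical instance

# VisitsSummary.getVisits with period=range & date=last7 returns the
# total number of visits over the last 7 days as {"value": N}.
visits = requests.get(
    MATOMO,
    params={
        "module": "API",
        "method": "VisitsSummary.getVisits",
        "idSite": "1",
        "period": "range",
        "date": "last7",
        "format": "JSON",
        "token_auth": "YOUR_MATOMO_TOKEN",  # placeholder
    },
    timeout=30,
).json()["value"]

sentry_errors = 42  # stand-in for the 7-day Sentry event count

errors_per_visit = sentry_errors / max(int(visits), 1)
print(f"{errors_per_visit:.5f} errors per visit over the last 7 days")
```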
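The four SonarQube scores aggregate measures over every project tagged with the site's URL whose name ends in '-master'. The SonarQube sketch below queries a single, already-known project via the SonarQube Web API and applies the scoring rules described above; the instance URL, token, and project key are placeholders, and a real run would sum the line counts across all matching projects first:

```python
import requests

SONAR = "https://sonarqube.example.org"  # hypothetical instance
TOKEN = "YOUR_SONARQUBE_TOKEN"           # placeholder
PROJECT = "my-site-backend-master"       # a project tagged with the site URL

METRICS = ("ncloc,lines_to_cover,uncovered_lines,duplicated_lines,"
           "reliability_rating,security_rating,sqale_rating")

resp = requests.get(
    f"{SONAR}/api/measures/component",
    params={"component": PROJECT, "metricKeys": METRICS},
    auth=(TOKEN, ""),  # SonarQube tokens go in the basic-auth username
    timeout=30,
)
measures = {m["metric"]: m["value"]
            for m in resp.json()["component"]["measures"]}

# Coverage score: covered lines / total coverable lines, as a percentage.
to_cover = int(measures["lines_to_cover"])
covered = to_cover - int(measures["uncovered_lines"])
coverage_score = round(100 * covered / to_cover) if to_cover else 100

# Ratings come back as "1.0".."5.0" for A..E; map A=100 .. E=0 per the
# rules above (reliability=bugs, security=vulnerabilities,
# sqale/maintainability=code smells).
rating_score = {1: 100, 2: 75, 3: 50, 4: 25, 5: 0}
bugs_score = rating_score[int(float(measures["reliability_rating"]))]
vulnerabilities_score = rating_score[int(float(measures["security_rating"]))]
code_smells_score = rating_score[int(float(measures["sqale_rating"]))]

# Duplication score: 100 minus the share of duplicated lines (ncloc is
# used as the line total here; the audit may count lines differently).
ncloc = int(measures["ncloc"])
duplication_score = (round(100 - 100 * int(measures["duplicated_lines"]) / ncloc)
                     if ncloc else 100)

print(coverage_score, bugs_score, vulnerabilities_score,
      code_smells_score, duplication_score)
```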
For more info, please visit the quality metrics wiki page.