If your work is serious enough to upset someone with power, it is serious enough to need logs that can stand up to questions. Guesswork and vague analytics are not good enough.
I design and configure logging for public interest, advocacy, and watchdog sites so you can show what actually happened on your platform, when, and from where, within lawful limits.
For a deeper explanation of how lawful fingerprinting supports this, with redacted real world examples, see Fingerprinting and Edge Tracker.
Evidence grade logging is not for vanity metrics. It is for people who expect questions from organisations, regulators, or oversight bodies.
Independent sites that publish investigations or lived experience accounts about public services, justice, or health systems.
Groups that track how councils, housing providers, or contractors behave over time, and who may need to show patterns of attention.
Platforms that receive sensitive disclosures and might need to demonstrate how those routes were accessed and monitored.
Charities and campaigns that want a clearer view of who is paying attention to sensitive pages beyond supporter stats.
If you might one day be asked to show how your site was accessed, by whom in broad terms, and in what pattern, you are in the right place.
Many projects only discover their logging gap when someone has already challenged them, denied something, or claimed harassment.
High level dashboards that show pageviews and sessions, but cannot answer concrete questions about specific events or traffic bursts.
Raw numbers that mix crawlers, scrapers, and legitimate visitors, which makes it hard to establish what actually happened.
Hosting level logs saved somewhere nobody knows how to access, read, or export when an incident happens or a complaint arrives.
Blocks, challenges, and unusual hits are handled ad hoc, with no repeatable pattern or simple narrative that can be shared later.
You can see that traffic is up, but not that a cluster of visits came from a particular ASN or set of networks with obvious interests.
Trustees, funders, regulators, or ombudsmen ask what happened, and the answer relies on memory and screenshots from a phone.
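To make the network-attribution point concrete, here is a minimal, hypothetical Python sketch: it counts requests per network from combined-format access log lines. The prefixes, labels, and sample lines are invented; in practice the lookup would come from a maintained IP-to-ASN dataset, not a hand-written table.

```python
import ipaddress
from collections import Counter

# Invented prefix-to-owner table for illustration only. A real setup
# would use a maintained IP-to-ASN dataset rather than a static dict.
NETWORKS = {
    ipaddress.ip_network("203.0.113.0/24"): "ExampleCouncil AS64500",
    ipaddress.ip_network("198.51.100.0/24"): "ExampleContractor AS64501",
}

def label_for(ip_str):
    """Return the owning network label for an IP, or 'other'."""
    ip = ipaddress.ip_address(ip_str)
    for net, label in NETWORKS.items():
        if ip in net:
            return label
    return "other"

def count_by_network(log_lines):
    """Count requests per labelled network.

    Assumes combined-format lines where the client IP is the first
    whitespace-separated field."""
    counts = Counter()
    for line in log_lines:
        counts[label_for(line.split(" ", 1)[0])] += 1
    return counts

sample = [
    '203.0.113.7 - - [01/Jan/2025:10:00:00 +0000] "GET /report HTTP/1.1" 200 512',
    '203.0.113.9 - - [01/Jan/2025:10:01:00 +0000] "GET /report HTTP/1.1" 200 512',
    '192.0.2.44 - - [01/Jan/2025:10:02:00 +0000] "GET / HTTP/1.1" 200 128',
]
print(count_by_network(sample))
# Counter({'ExampleCouncil AS64500': 2, 'other': 1})
```

The point of a sketch like this is that the grouping is by network and pattern, never by person: the output is a count per organisation-level prefix, which is the granularity that survives scrutiny.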
Evidence grade logging solves these problems by setting up clear sources, clear exports, and a clear way to tell the story of what has been happening at the edge of your site.
It is not about infinite detail. It is about the right level of detail, consistently captured, so you can answer reasonable questions later.
Who is paying attention: not by name, but by network and pattern. For example, whether a particular authority, contractor, or company network repeatedly accessed certain pages.
Whether a spike in requests or challenges correlates with a story going live, an email being sent, or a complaint being raised elsewhere.
Whether Cloudflare rate limiting, rules, or fingerprinting challenged or blocked particular patterns in a way you can explain.
The point is not to chase every hit. It is to be in a position where, when someone queries your account of events, you have more than a vague sense that something happened.
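As a small illustration of the spike-correlation idea, the sketch below buckets request timestamps by hour and flags hours above a simple threshold, so a burst can be compared against when a story went live. The timestamps, publish time, and fixed baseline are all invented for the example; a real analysis would derive the baseline from your own historical traffic.

```python
from datetime import datetime, timedelta
from collections import Counter

def hourly_counts(timestamps):
    """Bucket request timestamps into per-hour counts."""
    return Counter(ts.replace(minute=0, second=0, microsecond=0) for ts in timestamps)

def spike_hours(timestamps, baseline=5):
    """Return hours whose request count exceeds a simple baseline.

    The fixed threshold is purely illustrative; in practice the
    baseline would come from historical traffic for the same pages."""
    return sorted(h for h, n in hourly_counts(timestamps).items() if n > baseline)

# Invented data: a story published at 14:00 draws a burst of visits,
# preceded by a quiet couple of hours.
publish = datetime(2025, 1, 6, 14, 0)
visits = [publish + timedelta(minutes=3 * i) for i in range(20)]            # 14:00-14:57
visits += [publish - timedelta(hours=2, minutes=10 * i) for i in range(4)]  # sparse earlier

for hour in spike_hours(visits):
    print(hour.isoformat(), "spike after publish:", hour >= publish)
# 2025-01-06T14:00:00 spike after publish: True
```

Lining spike hours up against a publish timestamp, an email send, or a complaint date is exactly the kind of simple, explainable correlation that holds up better than a memory of "traffic went up around then".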
Evidence grade logging is not about tracking individuals around the internet. It is about knowing what happened on your own site in a way that lines up with law and common sense.
That means collecting what you need, for a clear purpose, and for a sensible length of time. It means being able to explain your setup in plain language to trustees, regulators, or a court if required.
I work on the assumption that your logs might one day be read by people who do not care about your cause. They still need to make sense.
Evidence grade logging works best alongside hardened static builds, sensible bot mitigation, and clear boundaries about what you do and do not promise site visitors.
If I am handling your broader build as well, logging is part of the overall architecture, not an afterthought. If I am just handling logs, I work around what you already have and tell you honestly where that creates limits.
For related work, see Secure static sites and Bot mitigation for public interest sites.
Logging is powerful. It needs clear limits so it stays lawful, proportionate, and defensible.
The wider position is set out in the Cookies, analytics, and fingerprinting policy and the Neutral infrastructure policy.
If your site is unlikely to attract scrutiny, analytics might be enough. If you publish sensitive work, deal with whistleblowing, or expect institutional interest, you will want logs that can answer specific questions later.
Yes, when set up with a clear purpose, limited scope, appropriate retention, and proper transparency. Part of my job is to keep the technical side aligned with those principles. You still need your own legal advice and policies.
No. Logging and fingerprinting show devices, sessions, networks, and patterns on your own site. They do not attach real world names or personal identities.
Yes. Part of this offer is helping you translate raw logs into a simple narrative that can be understood by trustees, regulators, or oversight bodies, without exaggeration or spin.
It depends on your work, risk, and legal position. I can suggest practical ranges and trade offs. Final decisions should be taken with your governance and legal leads, then reflected in your policies.
Often yes. I can review what you already have, tighten Cloudflare, configure logging, and tell you where your current architecture creates limits or blind spots.
Tell me what kind of work you do, who might one day question it, and what logging you have right now. I will tell you plainly what is worth improving and how heavy that work is likely to be.