zerosleeps

Since 2010

I should have bid on the BoM’s new website

So after my attempt to defend The Bureau of Meteorology’s new website, we’re being told that it cost $96.5 million.

An experienced, competent, efficient consultant in my industry bills at about $1,000 per day. Let’s be equally generous by assuming one of those consultants works 250 days per year. That means this thing took 386 overpaid-overworked-consultant-years?
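Checking the arithmetic, a quick sketch using the figures above (the day rate and working days are the generous assumptions stated, not real billing data):

```python
cost = 96_500_000    # reported cost in dollars
day_rate = 1_000     # generous consultant day rate in dollars
days_per_year = 250  # generous working days per year

consultant_years = cost / (day_rate * days_per_year)
print(round(consultant_years))  # 386
```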

That can’t be right.

“Ordered” to fix new website?

This is a load of shit. The Bureau of Meteorology’s new website is gorgeous and delightfully functional, and it was in public beta for months. Presumably nobody whose bra is now in a twist bothered their arse to check it out or provide feedback when they had the opportunity. That alone earns the teams involved my full empathy, and that of every other developer who knows the frustration.

The federal government has asked the Bureau of Meteorology to fix its new website

It’s not broken.

Environment Minister Murray Watt says the Bureau needs to:

…adjust the website’s settings as soon as possible

Adjust the website’s “settings”? Super helpful feedback for those you’re directing it at, Murray.

And “Northern Victorian agronomist Malcolm Taylor” sounds like a real fucking wild one:

…I would expect some resignation notices over it

Over a website? Jesus.

Traffic stats

Back in February I declared that I wasn’t tracking visits to zerosleeps.com beyond what gets logged - and retained for only a short period of time - by my web server.

Something about that crawled around my brain for a while and in July I added a little bit of middleware to my Django stack to reliably log access requests. For every hit for a non-static resource I’ve been logging the datetime, requested path, the request’s User-Agent header, and the response’s HTTP status code:

class AccessLogMiddleware:
    """Log every non-static request to the AccessLog model."""

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        response = self.get_response(request)
        try:
            # The datetime comes from the model itself, e.g. a
            # DateTimeField(auto_now_add=True) on AccessLog.
            AccessLog(
                path=request.path,
                user_agent=request.headers.get("User-Agent", ""),
                status_code=response.status_code,
            ).save()
        except Exception:
            # Don't care if logging fails, just serve the response.
            # (A bare except would also swallow KeyboardInterrupt et al.)
            pass
        return response

That is, until a few moments ago when I ripped it all out again. I just don’t care. I’ve never cared. I enjoy knowing that the nonsense I put on this thing is being viewed by fellow citizens of planet Earth, but I don’t need detailed logs to know that. I’m honestly not interested in which posts garner interest and which ones don’t. I do this for myself, which means I write about whatever I want. The numbers don’t change that.

Besides all of that, the data just isn’t useful. The overwhelming majority of traffic is from self-identified bots (Bytespider, Amazonbot, SemrushBot, ClaudeBot, Thinkbot, ChatGPT-User, PetalBot, and developers.facebook.com are all in the top 20 user agents), and I’m not entirely sure what percentage of the rest of the traffic is from humans. There’s no way of knowing without drifting into creepy-tracking territory and even then I’d be super sceptical of the accuracy of any stats generated. That’s a hard pass from me anyway.

And the shit that these requests are for. About 20% of all requests have resulted in HTTP 404 responses, a very high percentage of those being for a path ending with “.php”. To the surprise of absolutely nobody who has ever put something on the internet the path at the top of the 404 list is… wp-login.php. Gross.
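The logs are gone now, but that sort of breakdown only takes a few lines. A minimal sketch, assuming each log row is a dict with `path` and `status_code` keys standing in for the AccessLog fields (the rows here are made-up examples, not my actual traffic):

```python
from collections import Counter

# Hypothetical rows standing in for AccessLog records.
rows = [
    {"path": "/", "status_code": 200},
    {"path": "/wp-login.php", "status_code": 404},
    {"path": "/xmlrpc.php", "status_code": 404},
    {"path": "/wp-login.php", "status_code": 404},
    {"path": "/about/", "status_code": 200},
]

not_found = [row for row in rows if row["status_code"] == 404]
share = len(not_found) / len(rows)

# Most-requested 404 paths, worst offenders first.
top_404s = Counter(row["path"] for row in not_found).most_common()

print(f"{share:.0%} of requests were 404s")
print(top_404s[0])  # ('/wp-login.php', 2)
```

With the real table this would have been a couple of Django ORM aggregations instead, but the idea is the same: count the 404s, group them by path, and despair.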

So it’s all gone. Web server logs as well.