Bear in mind that the UK has a “national speed limit” of 60mph for much of the countryside. This is very much a limit, a maximum, and you’re expected to drive to the conditions of the road. Even in perfect weather, if the road is twisting and not wide enough for two cars, you shouldn’t be driving at the speed limit.
Absolutely. The legal speed limit is 60 in the country - on any road not marked with a lower speed limit. This means that legally, you can drive at 60mph down a twisty single track road with 1.5m earth and rock banks topped with hedges.
You would be an irresponsible nutter with a death wish to try it, though! And if you crashed, "I was driving at / under the speed limit" wouldn't wash - you would be charged with Driving without Due Care and Attention, or Dangerous Driving, depending on the consequences of the crash.
Driving too fast for the conditions (but within the limit) would usually be considered Driving without Due Care and Attention even if you don't crash (although the likelihood of anyone being around to enforce it on a deserted country road is pretty low).
That's not the purpose of that law. That's just the pretext they use to get the useful idiots to endorse it. The real purpose is that if you do something stupid below the speed limit without violating any other specific law, they've still got something to nab you for.
Having driven in both the US and the UK, I'd say this is a significant difference between the two. In the UK, you might sometimes drive 30 under on a road that is nominally 60 mph. In the US, that road would have a specific posted speed limit that is actually safe to drive at. US roads are also more consistently designed for a constant speed, or have additional advisory speed limits for curves. You can nearly always drive as fast as the number on the sign unless there is some additional hazard.
I’m pretty sure UK security services can link bank accounts to you already. Hell, credit score companies like Experian already do it.
This is beneficial for public services and companies that need to identify you. Having a single ID for a person is a huge improvement. As an example, when we got married, my wife simultaneously had both her maiden name and my surname, depending on the document. There is absolutely no link between passport, birth certificate, driving license, etc.
You have to go around to all these different organisations and have them update the details. They all have different requirements for updating the name.
Having a single consistent mechanism for referring to a person in systems seems hugely beneficial for both the organisation and the person.
Having rolled out a number of development tools, including Jira and Confluence, I find it amazing that people let them sit there chugging away on underpowered machines, with hundreds of users quietly complaining about the speed. Throwing some extra CPU cores and memory at them is so cheap for the quality-of-life improvement, let alone the productivity gain.
The concurrent (human) user count at even large companies is probably a couple dozen at most.
Usually with these tools, the performance problems magically vanish if you disable all the integrations people have set up. My company is constantly denial-of-service attacking Jira with GitHub updates, for example.
I delivered a complex, highly customized enterprise back-office system for a large Fortune 500 company some time back. It involved a handful of servers (all as VMs), x3 to accommodate DEV/QA/PROD staging.
It worked great in volume testing in our environment. Their IT department installed it on high-end servers (hundreds of cores, incredibly expensive storage subsystems, etc.), but users complained of latency, random slowness, etc. IT spent weeks investigating and swore up and down it wasn't their end and must be a software issue. We replicated and completely sanitized production volumes of data to try to recreate the problem locally, and couldn't.
Finally I flew down and hosted their entire infrastructure off my laptop for a day (I'll skip all the security safeguards, contract assurances, secure wipes, etc). It flew like a thoroughbred at a racetrack. No latency, instant responsiveness, no timeouts, no hiccups. Their entire staff raved about the difference. The results gave the business unit VP what she needed to bypass the usual, convoluted channels, and someone must have lit a fire under their IT VP - by the end of that day their internal techs identified a misconfiguration on their storage arrays and solved the problem. I can only guess how many other apps were silently suffering for weeks or months on the same array. I joked I'd be happy to sell them a laptop or two for a fraction of their mainframe cost.
I had the experience, for a few years about a decade back, of having to run all of the self-hosted development and project management tooling for a government project, and the point about integrations rings true to that experience. The CI system that had been put in place was probably the most sophisticated I've ever seen, but it had some unfortunate side effects, like Jenkins jobs being kicked off automatically thousands of times an hour and blasting all of the Atlassian tools with network requests, or Nessus remotely logging into the servers actually hosting the Atlassian tools and spawning 40,000 simultaneous processes on them.
Complaining about JIRA has become enough of a trope that it mostly gets ignored.
Also, big enough corps give underpowered machines to the mass of employees (anyone who isn't a dev, designer, or lead of something), so latency is just life to them.
Obviously a very different view, but public transport in London (UK) is very much seen as faster than driving. The only reason someone would choose to drive is if they needed to transport something difficult to take on public transport.
I never understood the value of directory mapping when we used Perforce. It only seemed to add complexity: people on the same team checked out code into different hierarchies, and then some builds worked and some didn’t. Git was wonderful for having a simple layout.
I'm in exactly this situation with Perforce today, and I still hate it. The same problem OP described applies - you need to know which exact directories to check out to build, run tests etc successfully. You end up with wikis filled with obscure lists of mappings, many of them outdated, some still working but including a lot of cruft because people just copy it around. Sometimes the required directories change over time and your existing workspaces just stop working.
But isn't that exactly the previous poster's point? On free WiFi, someone can just MITM your connection; you would never know, and you'd think it's encrypted. It's the worst possible outcome. At least when there's no encryption, browsers can tell the user to be careful.
It might be possible to run an ACME client on another host in your environment. (IMHO, the DNS-01 challenge is very useful for this.) Then you can (probably) transfer the cert+key to BIG IP, and activate it, via the REST API.
I haven’t used BIG IP in a long while, so take this with a grain of salt, but it seems to me that it might not be impossible to get something going – despite the fact that BIG IP itself doesn’t have native support for ACME.
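To make that concrete, here's a rough Python sketch of the flow I have in mind: an ACME client (say, certbot with a DNS-01 plugin) obtains the cert on another host, and a small script then pushes the cert and key to the BIG IP over iControl REST. I'm writing the endpoint paths and the download directory from memory, so treat them as assumptions and double-check against the iControl REST documentation for your version:

```python
import requests

# Assumptions: hostname, credentials, and cert/key paths are placeholders.
# An ACME client (e.g. certbot using a DNS-01 challenge) has already written
# fullchain.pem and privkey.pem on this host.
BIGIP = "https://bigip.example.com"
AUTH = ("admin", "secret")          # token auth would be better in real life
CERT = "/etc/letsencrypt/live/example.com/fullchain.pem"
KEY = "/etc/letsencrypt/live/example.com/privkey.pem"

session = requests.Session()
session.auth = AUTH
session.verify = False  # management interfaces often have self-signed certs


def upload(path, name):
    """Upload a file via the iControl REST file-transfer endpoint.

    Uploaded files should land in /var/config/rest/downloads/ on the box
    (path from memory - verify for your version).
    """
    with open(path, "rb") as f:
        data = f.read()
    resp = session.post(
        f"{BIGIP}/mgmt/shared/file-transfer/uploads/{name}",
        headers={
            "Content-Type": "application/octet-stream",
            "Content-Range": f"0-{len(data) - 1}/{len(data)}",
        },
        data=data,
    )
    resp.raise_for_status()


def install(kind, name, filename):
    """Install an uploaded file as a cert or key object (kind: 'cert' or 'key')."""
    resp = session.post(
        f"{BIGIP}/mgmt/tm/sys/crypto/{kind}",
        json={
            "command": "install",
            "name": name,
            "from-local-file": f"/var/config/rest/downloads/{filename}",
        },
    )
    resp.raise_for_status()


upload(CERT, "example.com.crt")
upload(KEY, "example.com.key")
install("cert", "example.com.crt", "example.com.crt")
install("key", "example.com.key", "example.com.key")
# A client-ssl profile still has to reference the cert/key objects; reusing the
# same object names on each renewal lets existing profiles pick up the new pair.
```

You'd run that from the ACME client's renewal hook, and whether DNS-01 is practical depends on your DNS provider exposing an API the client can drive.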
F5 sells expensive boxes intended for larger installations where you can afford not to do ACME in the external facing systems.
Giving the TLS endpoint itself the authority to manage certificates kind of weakens the usefulness of rotating certificates in the first place. You probably don't let your external facing authoritative DNS servers near zone key material, so there's no reason to let the external load balancers rotate certificates.
Where I have used F5 there was never any problem letting the backend configuration system do the rotation and upload of certificates together with every other piece of configuration that is needed for day to day operations.
Exactly. According to posters here you should just throw them away and buy hardware from a vendor who does. >sigh<
Don't expect firmware / software updates to enable ACME-type functionality for tons of gear. At best it'll be treated as an excuse by vendors to make Customers forklift and replace otherwise working gear.
Corporate hardware lifecycles are longer than the proposed timeline for these changes. This feels like an ill-thought-out initiative by bureaucrats at companies that build their own infrastructure (in their ivory towers). Meanwhile, we plebs who work at less-than-Fortune-500 companies, stuck with off-the-shelf solutions, will be forced to suffer.
I've never considered ownership in an IaC repo down to the individual resource, and I'm struggling to see the use case.
We also use tags/labels to link the generated "thing" back to the repository that created it with:
- The repo URL
- The pipeline URL
- The commit hash (also retrievable from the pipeline details)
These are all discovered via GitLab CI variables [1].
From this we would use the Git repository to identify ownership.
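As a rough illustration (not our actual pipeline), the tagging step can look something like this in Python - GitLab's predefined CI variables are real, but the use of AWS/boto3, the EC2 instance ID, and the tag key names are assumptions made up for the example:

```python
import os

import boto3

# GitLab CI exposes these predefined variables to every job.
tags = {
    "repo-url": os.environ["CI_PROJECT_URL"],
    "pipeline-url": os.environ["CI_PIPELINE_URL"],
    "commit-sha": os.environ["CI_COMMIT_SHA"],
}

# Hypothetical: tag an EC2 instance created earlier in the pipeline; the
# instance ID would come from your provisioning step's output.
ec2 = boto3.client("ec2")
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],
    Tags=[{"Key": k, "Value": v} for k, v in tags.items()],
)
```

In practice you'd more likely feed these values into your IaC tool's default tags so everything it creates gets them, rather than tagging resources one by one.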
We have the benefit of our Infosec team having wide access to our GitLab instance; other companies without that access might find this approach harder.
How would you handle a situation where someone creates a resource but then leaves?
The good thing about looking at an entire repository is that it gives you the entire history and who else might have worked on it. In hierarchical Git providers (e.g. GitLab), it can also indicate where the project sits relative to others. If you only have a single person attached to a resource, you may struggle to find out who now owns it.