Making something pretty is usually the job of a designer. In Unreal terms, that would be your artist making 3D models. Their common tools are HTML, CSS, and Photoshop or a near equivalent.
Then you’ve got your back-end developer. This is the guy who’s responsible for everything behind the scenes, like page loads, database, login, payment, security - every functional aspect of what makes a website useful. This would be your C++ programmer.
Joomla has nothing to do with the appearance - that’ll either be custom to the theme, or custom to the specific plugin… often specific to the exact implementation. Joomla is primarily a CMS (content management system) whose front-end is pretty basic, and is mostly centered around the back end with heavy modularisation. Think of it like Wordpress, and plugins. Or Unreal Engine, and blueprinted assets.
People say web development is easy because there’s a lot of stuff you can do in isolation, with a low entry fee, and a low chance of failure.
You can vomit out an HTML page with some “not looking too crappy” CSS in minutes. HTML was designed by Tim Berners-Lee with ease of data sharing in mind, so that makes sense.
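As a sketch of just how low that entry fee is - everything below is illustrative, not taken from any real site:

```html
<!DOCTYPE html>
<html>
<head>
  <title>My first page</title>
  <style>
    /* A few lines of CSS is enough for "not looking too crappy". */
    body { font-family: sans-serif; max-width: 40em; margin: 2em auto; }
    h1   { color: #334; border-bottom: 1px solid #ccc; }
  </style>
</head>
<body>
  <h1>Hello, web</h1>
  <p>A page like this takes minutes to write, which is exactly the point.</p>
</body>
</html>
```

Open that in a browser and you have a styled, readable page - total time, a couple of minutes.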
You can download frameworks and set up very feature-rich websites pretty quickly. With a small amount of knowledge, you can tweak their design and layout and customise certain elements. A little knowledge goes a long way.
One of the most popular programming languages for web development - PHP - also lets you get away with a lot of bad coding practices. This is partially by design; it was invented as a language intended to be easy to learn, use and digest.
This is where the problem arises: a little knowledge goes a long way, and it’s very easy to make something that works but is actually a ****-poor solution. Often the flaws won’t get exposed until a malicious user attacks your site, and horrifying things happen which cost you your job or your company. Unfortunately, people are prone to taking the least-effort solution, and it takes a knowledgeable developer to show people the right way to do things - and often to remind them that just because you can do something one way doesn’t mean you should.
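To make that concrete, here’s a minimal sketch of the classic least-effort trap - a login query built by string concatenation versus a parameterised one. I’m using Python with SQLite for the demo (the identical mistake is endemic in quickly written PHP); the table and credentials are made up for illustration:

```python
import sqlite3

# Hypothetical users table, purely for demonstration.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

def login_naive(name, password):
    # The "it works, ship it" version: SQL built by string interpolation.
    query = f"SELECT * FROM users WHERE name = '{name}' AND password = '{password}'"
    return db.execute(query).fetchone() is not None

def login_safe(name, password):
    # Parameterised query: the driver treats the values as data, not SQL.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return db.execute(query, (name, password)).fetchone() is not None

# A classic injection payload sails straight through the naive version...
assert login_naive("alice", "' OR '1'='1") is True
# ...but the parameterised version treats it as just a (wrong) password.
assert login_safe("alice", "' OR '1'='1") is False
```

Both functions “work” when tested with honest input - the difference only shows up when someone hostile turns up, which is exactly why the flaw survives until it’s fatal.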
Well no - like any company or individual system on the internet, they had an exposed endpoint which was vulnerable to attack, and someone took advantage of it. Interestingly enough, malicious users - especially ones making targeted attacks - rarely announce their presence once they’ve gained access to your server. If you’re lucky, you’ve got some kind of monitoring which can pick up on such a thing. If that doesn’t happen, then usually the group will let you know that they stole 300,000 personal details because they want their street cred. Non-malicious (data-harvesting, or government) attacks are very, very hard to pick up on, precisely because they don’t want to be detected.
The situation would have been exacerbated by the fact it’s a government entity - regardless of the data stolen, governments are notorious for living with known problems rather than spending the money to fix risks, and for endless red tape and hoops that need to be jumped through before things get fixed. Chances are their systems ran old operating systems with old software, because keeping them up to date was either a massive inconvenience, or the code they ran was incompatible with more recent versions.
This isn’t a trait restricted to governments, either - I’ve worked for companies that had gaping security issues pointed out to them, only to treat them as trivial, low-priority items, because fixing them would require time and effort, and dealing with the issue isn’t a revenue-generating task. It’s not until these flaws are exposed in a fatal manner that they get fixed in a mad panic. Richard Feynman famously pointed this out in the Rogers Commission Report on the Challenger disaster: engineers identify critical risks in a quantitative, scientific manner, while the further you progress up the chain of management, the smaller the perceived risk becomes.
Only in recent years has DevOps really been recognised as a true web role. Maintaining your servers, security and uptime is the primary focus of this role.
The old methodology used to require someone manually setting up a server, and if you were lucky you could clone it as a virtual machine. Updates would be applied to each box individually during a “quiet period”. Thanks to the likes of Google, Facebook and Amazon, we now have systems where I can literally type one command and a multi-server cluster will be created in the cloud to my specifications. This can load my web-app code all by itself, and I can have a cluster of servers running in minutes. If a security alert comes out, I just update a single service container (like, the webserver from v2.1 to v2.2) and press my button again. A brand new cluster is created, and the old cluster is decommissioned minutes later.
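That “bump one version, press the button” loop can be sketched with a Kubernetes-style deployment spec - every name and version number below is illustrative, not from any real setup:

```yaml
# deployment.yaml - a declarative description of the cluster.
# Names and image tags are made up for illustration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 4                          # a small cluster of identical servers
  selector:
    matchLabels:
      app: webserver
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: webserver
        image: example/webserver:2.2   # bumped from 2.1 after the security alert
```

Re-running `kubectl apply -f deployment.yaml` after editing the image tag rolls out new containers and retires the old ones - much like the cluster replacement described above.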
Being complacent often leads to failure. Diligence is expensive. “Simple” websites are to enterprise cloud-service websites what driving a golf cart around a golf course is to rally driving.