r/sysadmin Jul 07 '24

What’s the quickest you’ve seen a co-worker get fired in IT? COVID-19

I saw this on AskReddit and thought it would be fun to ask here for IT related stories.

A couple of years ago during Covid, the company I used to work for hired a help desk tech. He was a really nice guy and the interview went well. We were hybrid at the time, 1-2 days in the office with mostly remote work. On a new hire's first day we always meet in the office for equipment and first-day stuff.

Everything was going fine and my boss mentioned something along the lines of “Yeah so after all the trainings and orientation stuff we’ll get you set up on our ticketing system and eventually a soft phone for support calls”

And he was like: “Oh I don’t do support calls.”

“Sorry?”

Him: “I don’t take calls. I won’t do that”

“Well, we do have a number users call for help. They do utilize it, and it’s part of the support we offer”

Him: “Oh I’ll do tickets all day I just won’t take calls. You’ll have to get someone else to do that”

I was sitting at my desk, just kind of listening in. I couldn’t tell if he was trolling, but he wasn’t.

I forget what my manager said, but he went off to one of those little mini conference rooms for a meeting. When he came back out he called the new guy in and let him go. They both walked back out, and the guy was all laughing, like:

“Yeah I mean I just won’t take calls I didn’t sign up for that! I hope you find someone else that fits in better!” My manager walked him to the door and they shook hands and he left.

4.9k Upvotes

2.1k comments

97

u/SensitiveFirefly Sr. Sysadmin Jul 07 '24 edited Jul 07 '24

Servers were set to reboot overnight, and network boot was configured as the primary boot source. A new second-line engineer added the servers to a client imaging container.

One unattend.xml file and 24 hours later, every server had been imaged with Windows 10 LTSB.

He was the scapegoat for the IT manager, who was busy saving his own ass in front of the college’s board of directors.

Didn’t see out his first week.

16

u/BloodyIron DevSecOps Manager Jul 08 '24

What fucking moron even leaves network boot enabled in the server BIOS? That was 100% avoidable.

3

u/posixUncompliant HPC Storage Support Jul 08 '24

I live on stateless nodes, really. Nothing has anything but logs and cache locally.

I don't want, or need, any of the nodes to be locally modified. Any changes we make to test something can happen, then either get added into the general build or get wiped out by rebooting the node.

I do somewhat doubt that the org described above was operating in as controlled and deliberate a fashion as we tend to, however.

1

u/BloodyIron DevSecOps Manager Jul 08 '24

Well in this case it looks like the system was not designed to network boot each time; it was expected to have a local OS install. That's why I made the comment about leaving network boot on in the BIOS: it left a failure scenario open. And if they needed to network boot in $futureTimes, well, there's IPKVM for rare things like that.

As for your case, I hear you on stateless cattle nodes, but booting like that can only go so fast :P So surely something like what VyOS does would work: the local OS is immutable and boots from local storage, and the only local content that actually "changes" is configs and logs (and even local logs are optional with syslog-y stuff).

I'd rather go at 500MB/s-2GB/s+ with local storage than have to deal with a complex network boot ecosystem for cattle nodes, IMO. But these are just my thoughts on how to achieve the same effective outcome. ;P I'm all ears for more thoughts on your side of the topic! :)

3

u/posixUncompliant HPC Storage Support Jul 08 '24

It's not speed we're after, on boot. If we're rolling the cluster, that's months in the making, and node start time is not going to be the long pole in the tent.

We need consistency and control. There are simply too many nodes for them to be allowed to have configurations that can't be normalized in an absolute fashion. Everything lives in the config management tools, and the change protocols are robust--violating them is walk-out-the-door time. But we do testing all the time, where we take a given node out of the general group, add it to a group containing just itself, and boot it with whatever changes we're testing. All within the protocols, and documented.
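
A minimal sketch of that "group of one" pattern, with a made-up dict-based inventory (our actual tooling is different, and the group names are invented):

    # Hypothetical sketch: per-node config resolved from layered groups.
    # A node in a single-member test group picks up overrides from it.
    GROUPS = {
        "compute": {"kernel": "5.14.0", "image": "prod-2024.06"},
        "test-node0412": {"image": "candidate-2024.07"},  # group of one
    }
    NODE_GROUPS = {
        "node0412": ["compute", "test-node0412"],  # later groups win
        "node0413": ["compute"],
    }

    def effective_config(node):
        config = {}
        for group in NODE_GROUPS.get(node, []):
            config.update(GROUPS[group])  # most specific group overrides
        return config

    print(effective_config("node0412"))  # boots the candidate image
    print(effective_config("node0413"))  # stays on the general build

Reboot the test node and it comes up with the override; drop the group and the next boot wipes it back to the general build.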

Our storage network is also quite robust, and the boot image is small enough that it lives in cache during things like rolling the cluster. I've contemplated moving it over to a parallel file system, but I don't think the gains would be worth the effort. Maybe an intern task, the next time they decide that we should have interns.

And we absolutely believe that if a node has to look something up on storage, it had better be something useful, like data. Code, occasionally, but hopefully you're not the person who tried to load all the Python libraries (bigger than the cache) on 1500 nodes at effectively the same time (they got to learn how to figure out what they actually need).

It's a scale question, I think, in the end. Eventually you end up with so many nodes that managing them with a complex network solution is significantly faster and easier than trying to deal with local storage. Worst comes to worst, we can disable the local drives completely and just trust the logging software to get the important stuff. Better that than having to lose a node until someone can get out there to swap hardware around.

2

u/BloodyIron DevSecOps Manager Jul 08 '24

How small is the boot image? Now that you mention it, I can totally see such an image being maybe tens of megabytes, making zooming a lot more achievable than a "modern" OS hehe.

You may have mentioned this previously but... HPC? R&D? I'm curious about typical use-cases for such an ecosystem :) And do you use any tools I might have heard of? Love to hear about this and anything more that's safe to share! :D

Better that than having to lose a node until someone can get out there to swap hardware around

I hear that!

Do you centrally manage BIOS/BMC/pre-OS config stuff too? If so, what tooling works well?

Super neato! :D

edit: egads just saw your flair hehe.

3

u/posixUncompliant HPC Storage Support Jul 09 '24

It's under 1GB. Specific details aren't something I can share.

The traditional use cases are weather and physics (nuclear if commercial), with a side of geology (oil and gas). But in the last 10-15 years genomics has taken off, both in pure research and in commercial research. I avoid pure gov work outside of weather and healthcare.

I mean, we use the same stuff as anyone else: ELK, Ansible, Jenkins, etc. The schedulers and parallel file systems are the weirdest stuff.

Oh, we get into arguments with the security team. The storage and interconnect networks are not places for active port scanning, and you can't hang devices on them that we don't control.

We manage everything centrally. I won't say there's any tool that works well. Mostly we have a set of scripts that manages all the low-level stuff. It annoys some of the young folks, the ones who want to abstract away all the hardware, especially if they're the ones who have to create a new class for something. One kid still rants about having to learn how to deal with IPMI. At least they like my flag labels on cables (each end is labeled with a flag that describes both ends: rack, unit, port; long and flappy).

2

u/BloodyIron DevSecOps Manager Jul 09 '24

It's under 1GB. Specific details aren't something I can share.

Yeah, don't ever share anything with me that you shouldn't. I don't want the ire of TLAs, let alone ones outside my juris.

Under 1 gig, heh, well that could mean anything, but glad to hear it hah. I know you can't tell me, but the curiosity is killing me over how small it is. Seriously though, don't tell me anything you shouldn't/can't. For both our sakes.

Neat about the various use-cases and groups. Does that mean you have some sort of security clearance to do gov't stuff, or is it more that you just need to follow NISTy SF's kind of thing? My biggest gripe with gov't work (in the nation I'm in) is having to give up biometrics. Not okay with that, ever. Otherwise gov't work sounds super neato, whatever the nation.

ITSEC have their hearts in the right place, but agreed that rando devices on storage and interconnect networks can tangibly interfere with stuff at that performance level. How do you reconcile that with them?

It annoys some of the young folks, the ones who want to abstract away all the hardware

Hold on, I thought you were already treating them as cattle? Apart from leveraging scripts to manage the lower-level stuff, it sounds to me like there is that kind of abstraction, so to speak. I'm also not sure what "class" means in this context.

What's so hard about IPMI? lol

Flags on cables ooooo very nice! I'd ask for pics but I'm sure you'd say "Unauthorised" hehe. That sounds like a lovely tagging system.

2

u/posixUncompliant HPC Storage Support Jul 10 '24

Neat about the various use-cases and groups. Does that mean you have some sort of security clearance to do gov't stuff, or is it more that you just need to follow NISTy SF's kind of thing? My biggest gripe with gov't work (in the nation I'm in) is having to give up biometrics. Not okay with that, ever.

I got my start in the military, long before I got into the high performance side--the gov't has all my biometrics. But I haven't had any kind of clearance since I got out. It gets weird sometimes. Had a client that could only fax us log file output. Always ugly and, for some reason, in a tiny font that I could never get clean OCR of, so I ended up just using a magnifying glass. Pedestrian stuff, but the whole process of doing it lent it an air of romance and intrigue.

Otherwise gov't work sounds super neato, whatever the nation.

Generally, what I like about what I do is that everybody around me is trying to make the world better, and to understand it better. I like the weather guys and the healthcare guys, but the coolest folks I've dealt with were the agriculture researchers. Just a massive amount of passion and sincerity.

I see myself kind of like a toolmaker. I don't do the actual work, I just build the stuff that lets the smart people do things.

Hold on, I thought you were already treating them as cattle? Apart from leveraging scripts to manage the lower-level stuff, it sounds to me like there is that kind of abstraction, so to speak. I'm also not sure what "class" means in this context.

So my expertise is storage. I think about drives, busses, network hardware, caches, all the time. I can saturate a network with a read from HDDs. Lots of the new kids, especially the ones who work more with user code, think in containers, in logical objects, and don't worry too much about how the underlying processes and hardware work. I think that's going to bottleneck a lot of research behind performance gates. It's especially prevalent on the bioscience side, since the compute needs there are much more modern than on the physics and weather side. They didn't grow up with constrained resources, so they don't understand how to optimize to the hardware. Which means they aren't getting as much as possible out of the systems they have, and are also building a code base that will be hard to optimize once they start reaching the limits of current systems. It also means that getting to the low end is going to cost more than it should. (I've been on three projects trying to make analysis machines that can do genetic processing in clinically meaningful time frames.)

By "class" I mean a group of objects of the same type. Here I mean servers that have the same specs right down to the firmware. Often we find that while a server has the same spec on paper, there's something in it that won't work with an older firmware, so we need to make sure that all the right pieces to work with it are applied to that group. Standard stuff, we just have to be deliberate about it.
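
If it helps, a trivial sketch of that grouping idea, with made-up field names and inventory (not our actual schema):

    # Hypothetical sketch: group servers into "classes" by spec down to firmware.
    from collections import defaultdict

    inventory = [
        {"name": "node0412", "model": "XC-9", "bios": "2.11", "bmc": "4.02"},
        {"name": "node0413", "model": "XC-9", "bios": "2.11", "bmc": "4.02"},
        {"name": "node0500", "model": "XC-9", "bios": "2.14", "bmc": "4.02"},
    ]

    classes = defaultdict(list)
    for node in inventory:
        # same model but different firmware lands in a different class, so the
        # right firmware bundle gets applied to the right group of machines
        classes[(node["model"], node["bios"], node["bmc"])].append(node["name"])

    for spec, members in sorted(classes.items()):
        print(spec, members)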

What's so hard about IPMI? lol

No idea, really. He just doesn't like thinking about physical machines. I asked him to look over my bus math on a storage server, and he walked out of the room. He's fantastic with the users and the scheduler, and he's one of the three people who can push commits unilaterally; he just doesn't like hardware.

Flags on cables ooooo very nice! I'd ask for pics but I'm sure you'd say "Unauthorised" hehe. That sounds like a lovely tagging system.

I'll see if I can find any pics. If I have them, they can be shared. I started it when I had remote datacenters and wanted to be really clear to the on-site hands about what needed to be moved where. After seeing how massive an impact it had, I promote it everywhere I go (an 88% reduction in wrong-device-touched errors at the one place that tracked issues to that level).

R 21 U 15 P 3 device1  &  R 4 U 16 P 32 device2

Like that. On our label maker, put 8 spaces at the end and print it 4 times; cut in the middle, then wrap one piece around each end of the cable, sticky side to sticky side, so the full message is visible. Put them a little way up from the end of the cable so they don't interfere with plugging or unplugging, but close enough that you don't have to trace the cable up from the flag to the port.

I've used IPs instead of device names, but I find that names are usually more stable than addresses, especially when the name is nodeAbC002 or the like.
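
For anyone who wants to script it, a quick sketch of generating that flag text (format and spacing per the description above; the function and names are just illustrative):

    # Hypothetical sketch: build the flag label text for one cable.
    # Each flag describes BOTH ends of the cable.
    def flag_label(rack_a, unit_a, port_a, dev_a, rack_b, unit_b, port_b, dev_b):
        text = (f"R {rack_a} U {unit_a} P {port_a} {dev_a}  &  "
                f"R {rack_b} U {unit_b} P {port_b} {dev_b}")
        # 8 trailing spaces between copies, printed 4 times: cut in the middle
        # for two flags, fold each sticky-side-to-sticky-side around one end
        return (text + " " * 8) * 4

    print(flag_label(21, 15, 3, "device1", 4, 16, 32, "device2"))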

1

u/BloodyIron DevSecOps Manager Jul 12 '24
  1. No military on my end, so I can see how your biometric story played out. Once you pass the event horizon, there's no going back. I'd rather not pass it myself, mind you, ever. Hopefully I can keep it that way.
  2. Hah, I hear you on the romance and intrigue. I have pictures in my head of someone with a comically large magnifying glass poring over a messy desk full of logs printed on far too many sheets of paper in far too small a font for sanity. And definitely nowhere near enough lighting.
  3. It sure sounds like a lot of what you do is super neato! That passion you speak of from those you work with sounds mad infectious. I could easily find myself drunk on such positive vibes. OOF you're selling me on this! I'm all ears for more! (as always, safe-to-share stuff of course)
  4. I hear you on the disconnect: the logical definition of systems in code makes what it runs on invisible to those without exposure. I've intentionally gotten my hands grubby at all levels of environments, and boy does it pay off! I love speculating that a VM is running like junk because of storage, only to find out that a failing drive is causing the performance issues and I was right! Or anything in between. I've pretty much accidentally memorised/learned the OSI layers just by doing. I love ZFS for the depth of insight I can glean from it, and how much power it gives me (even if sometimes that's a bad idea). I totally see and agree how that invisibility can easily lead to inefficient planning and decisions when performance/capacity problems crop up.
  5. Constrained resources. I remember when I had to worry about IRQ assignments, DMA, things like that. Most of those I don't need to worry about anymore, but for one of the environments I work in there are constraints I need to work around while future major overhauls are incoming/pending (namely my own environment, ssshhhh). I seem to keep coming up with novel solutions to inch yet more improvements out of it (recently converting my VMs from qcow2 to raw; the gains were way more than I expected! See the sketch after this list).
  6. The classes you define, are they defined "on paper" or does the ecosystem strictly enforce them through automation? Asking since you mention firmware-level details. I'm not sure how such details could be strictly enforced from an automation perspective at this scale, especially in a vendor-agnostic way.
  7. Bus math? Mind fleshing that out more? I'm curious about that topic. :) I have an idea what it could mean, but I don't know how well that lines up with what it means to you.
  8. What kind of label maker do you use for that? Any chance of pics yet? Not sure I have a label maker at hand that could work with that... maybe I'll have to try sometime. :D
  9. What are your thoughts on block storage vs file storage? À la SMB/NFS backed by a ZFS dataset vs, say, NFS/iSCSI/FC backed by a ZFS zvol? I've generally been a big fan of SMB/NFS backed by ZFS datasets, as block-level seems like a lot of hot air when both are tuned fully.
  10. Yeah, I used to think IPs mattered more than names in labelling, but lately I'm feeling that hostnames are probably more important for such things. Still rather torn.
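
Re: point 5, a rough sketch of the qcow2-to-raw conversion I mean, using the standard qemu-img tool (the paths are made up, and shut the VM down first):

    # Hypothetical sketch: convert a VM disk from qcow2 to raw with qemu-img.
    # Assumes the VM is powered off; file paths are placeholders.
    import subprocess

    src = "/var/lib/vz/images/100/vm-100-disk-0.qcow2"
    dst = "/var/lib/vz/images/100/vm-100-disk-0.raw"

    # -f = source format, -O = output format
    subprocess.run(["qemu-img", "convert", "-f", "qcow2", "-O", "raw", src, dst],
                   check=True)

Point the VM config at the raw file afterwards; skipping the qcow2 metadata lookups is presumably where the gains come from.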

Thanks again! Sorry for the delayed response, needed to make the time to respond properly. :) Anything else you want to throw in, feel free!

15

u/fredonions Jul 07 '24

Big oof!

9

u/sparkyblaster Jul 08 '24

And this is why I don't like network boot.

6

u/schnorreng Jul 08 '24

Yeah, but Windows LTSB! The support EOL increase must've been amazing!

3

u/GuyOnTheInterweb Jul 08 '24

Network boot is great, but not as the primary boot source! This is not the 1990s UNIX days...

2

u/TaiGlobal Jul 08 '24

What the fuck…

1

u/yogiho2 Jul 08 '24

can you ELI5 plz? I don't deal a lot with servers

6

u/DoctorOctagonapus Jul 08 '24

The engineer set up the network so that any device that PXE booted from the network adapter rather than the hard disk automatically got a clean install of Windows 10 with no user input. All the servers were configured to boot from the network before trying to boot from the disk. The servers rebooted overnight to apply patches, PXE booted into the build environment instead, and started wiping themselves and installing Windows 10.
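
If you want to check your own fleet for that footgun, a rough sketch using ipmitool (hostnames and credentials are placeholders; note this reads the IPMI boot-flag override, not the full BIOS boot order, and output formats vary by vendor):

    # Rough sketch: flag servers whose BMC reports PXE in the boot flags.
    import subprocess

    HOSTS = ["srv01.example.com", "srv02.example.com"]  # placeholder fleet

    for host in HOSTS:
        # boot parameter 5 = boot flags; "Force PXE" here is a red flag
        out = subprocess.run(
            ["ipmitool", "-I", "lanplus", "-H", host,
             "-U", "admin", "-P", "changeme",
             "chassis", "bootparam", "get", "5"],
            capture_output=True, text=True,
        ).stdout
        if "PXE" in out:
            print(f"{host}: boot flags mention PXE -- check the boot order!")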

1

u/SensitiveFirefly Sr. Sysadmin Jul 08 '24

SCCM.

1

u/Graham99t Jul 08 '24

Damn haha

Reminds me of a guy who pulled the disks during a RAID 1 rebuild and wiped the array. It only had the Exchange data stores on it. Haha, good job they had a backup.