Sysadmin
-
Ghost ports
In the middle of a live VLAN readdressing at a 200-node company, I encountered this gem. The ports just kept blinking even after I unplugged the cables. (HP Aruba 24-port switch)
One turned off after a reboot.
-
The new Dell naming scheme is out
Here it is:
- Dell Base
- Dell Plus
- Dell Premium
- Dell Pro Base
- Dell Pro Plus
- Dell Pro Premium
- Dell Pro Max Base
- Dell Pro Max Plus
- Dell Pro Max Premium
What a time to be alive
-
Ever struggled with screen tearing? Don't worry, some companies won't fix it even on the screens that make them money.
Video
Transcript
Vertical ad screen with Coca-Cola's Christmas ad featuring Santa's sleigh and a winding trail. The video runs at about 20 fps and there are obvious vertical tears in it. Also featuring rolling shutter and moiré artifacts not seen IRL.
-
How can I delete images that aren't posted by my users from pict-rs?
cross-posted from: https://gregtech.eu/post/5084911
> Essentially, I'd like to have pictrs delete all of the images that aren't uploaded by my users, because my storage usage was going through the roof, so I just disabled the proxying of images. Here is my config:

```yaml
x-logging: &default-logging
  driver: "json-file"
  options:
    max-size: "50m"
    max-file: "4"

services:
  proxy:
    image: docker.io/library/nginx
    volumes:
      - ./nginx_internal.conf:/etc/nginx/nginx.conf:ro,Z
      - ./proxy_params:/etc/nginx/proxy_params:ro,Z
    restart: always
    logging: *default-logging
    depends_on:
      - pictrs
      - lemmy-ui
    labels:
      - traefik.enable=true
      - traefik.http.routers.http-lemmy.entryPoints=http
      - traefik.http.routers.http-lemmy.rule=Host(`gregtech.eu`)
      - traefik.http.middlewares.https_redirect.redirectscheme.scheme=https
      - traefik.http.middlewares.https_redirect.redirectscheme.permanent=true
      - traefik.http.routers.http-lemmy.middlewares=https_redirect
      - traefik.http.routers.https-lemmy.entryPoints=https
      - traefik.http.routers.https-lemmy.rule=Host(`gregtech.eu`)
      - traefik.http.routers.https-lemmy.service=lemmy
      - traefik.http.routers.https-lemmy.tls=true
      - traefik.http.services.lemmy.loadbalancer.server.port=8536
      - traefik.http.routers.https-lemmy.tls.certResolver=le-ssl

  lemmy:
    image: dessalines/lemmy:0.19.8
    hostname: lemmy
    restart: always
    logging: *default-logging
    volumes:
      - ./lemmy.hjson:/config/config.hjson:Z
    depends_on:
      - postgres
      - pictrs
    networks:
      - default
      - database

  lemmy-ui:
    image: ghcr.io/xyphyn/photon:latest
    restart: always
    logging: *default-logging
    environment:
      - PUBLIC_INSTANCE_URL=gregtech.eu
      - PUBLIC_MIGRATE_COOKIE=true
      # - PUBLIC_SSR_ENABLED=true
      - PUBLIC_DEFAULT_FEED=All
      - PUBLIC_DEFAULT_FEED_SORT=Hot
      - PUBLIC_DEFAULT_COMMENT_SORT=Top
      - PUBLIC_LOCK_TO_INSTANCE=false

  pictrs:
    image: docker.io/asonix/pictrs:0.5
    # this needs to match the pictrs url in lemmy.hjson
    hostname: pictrs
    # we can set options to pictrs like this, here we set max. image size and forced format for conversion
    # entrypoint: /sbin/tini -- /usr/local/bin/pict-rs -p /mnt -m 4 --image-format webp
    #entrypoint: /sbin/tini -- /usr/local/bin/pict-rs run --max-file-count 10 --media-max-file-size 500 --media-retention-proxy 10d --media-retention-variants 10d filesystem sled -p /mnt
    user: 991:991
    environment:
      - PICTRS__STORE__TYPE=object_storage
      - PICTRS__STORE__ENDPOINT=https://s3.eu-central-003.backblazeb2.com/
      - PICTRS__STORE__BUCKET_NAME=gregtech-lemmy
      - PICTRS__STORE__REGION=eu-central
      - PICTRS__STORE__USE_PATH_STYLE=false
      - PICTRS__STORE__ACCESS_KEY=redacted
      - PICTRS__STORE__SECRET_KEY=redacted
      - PICTRS__MEDIA__RETENTION__VARIANTS=0d
      - PICTRS__MEDIA__RETENTION__PROXY=0d
      - PICTRS__SERVER__API_KEY=redacted_api_key
      #- PICTRS__MEDIA__IMAGE__FORMAT=webp
      #- PICTRS__MEDIA__IMAGE__QUALITY__WEBP=50
      #- PICTRS__MEDIA__ANIMATION__QUALITY=50
    volumes:
      - ./volumes/pictrs:/mnt:Z
    restart: always
    logging: *default-logging

  postgres:
    image: docker.io/postgres:16-alpine
    hostname: postgres
    volumes:
      - ./volumes/postgres:/var/lib/postgresql/data:Z
      #- ./customPostgresql.conf:/etc/postgresql.conf:Z
    restart: always
    #command: postgres -c config_file=/etc/postgresql.conf
    shm_size: 256M
    logging: *default-logging
    environment:
      - POSTGRES_PASSWORD=password
      - POSTGRES_USER=lemmy
      - POSTGRES_DB=lemmy
    networks:
      - database

  postfix:
    image: docker.io/mwader/postfix-relay
    restart: "always"
    logging: *default-logging

networks:
  default:
    name: traefik_access
    external: true
  database:
```
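Since the config already sets `PICTRS__SERVER__API_KEY`, one option is pict-rs' internal admin API. This is a dry-run sketch that only prints (never runs) the curl command to purge a single alias; the `/internal/purge` endpoint and `X-Api-Token` header are from my reading of the pict-rs 0.5 docs, so verify them against your version, and `PICTRS_URL` and the alias are placeholders.

```shell
#!/bin/sh
# Dry-run sketch: print (don't execute) the curl command that purges one
# alias through the pict-rs internal API. Endpoint and header assumed from
# pict-rs 0.5 docs -- verify before use. PICTRS_URL/API_KEY are placeholders.
purge_cmd() {
  alias="$1"
  url="${PICTRS_URL:-http://localhost:8080}"
  key="${API_KEY:-redacted_api_key}"   # must match PICTRS__SERVER__API_KEY
  echo "curl -X POST -H 'X-Api-Token: $key' '$url/internal/purge?alias=$alias'"
}
```

Pipe the output to `sh` only once you've confirmed the endpoint works on your instance, e.g. `purge_cmd 0a1b2c3d.webp | sh`.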
-
Test restore your backups
Went to do a test restore of one of my databases and I noticed the dump files over the last few months were all 0 KB. Glad I caught it this way and not because I needed to restore. Put it on your calendar: schedule a test restore of your critical stuff a couple of times a year. I know y'all are busy, but it is worth the time and effort. A backup you can't actually restore isn't a backup at all.
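A cheap first line of defense against the exact failure above is a script that flags zero-byte dumps. This is a minimal sketch; the `*.sql*` pattern is an example, and it is no substitute for an actual test restore into a scratch instance.

```shell
#!/bin/sh
# Flag dump files that are zero bytes -- a zero-byte dump almost always
# means the backup job failed silently (bad credentials, full disk, etc.).
# The '*.sql*' name pattern is an example; adjust for your dump names.
check_backups() {
  dir="$1"
  find "$dir" -name '*.sql*' -size 0
}
```

Wire it into cron and alert on any output; then still do the real restore test a couple of times a year.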
-
How can I make my Lemmy server delete cached images?
The storage usage is at 340GB currently, which is a lot, and it's rapidly increasing. I use Backblaze B2 for my storage. Here is my docker compose file:

```yaml
x-logging: &default-logging
  driver: "json-file"
  options:
    max-size: "50m"
    max-file: "4"

services:
  proxy:
    image: docker.io/library/nginx
    volumes:
      - ./nginx_internal.conf:/etc/nginx/nginx.conf:ro,Z
      - ./proxy_params:/etc/nginx/proxy_params:ro,Z
    restart: always
    logging: *default-logging
    depends_on:
      - pictrs
      - lemmy-ui
    labels:
      - traefik.enable=true
      - traefik.http.routers.http-lemmy.entryPoints=http
      - traefik.http.routers.http-lemmy.rule=Host(`gregtech.eu`)
      - traefik.http.middlewares.https_redirect.redirectscheme.scheme=https
      - traefik.http.middlewares.https_redirect.redirectscheme.permanent=true
      - traefik.http.routers.http-lemmy.middlewares=https_redirect
      - traefik.http.routers.https-lemmy.entryPoints=https
      - traefik.http.routers.https-lemmy.rule=Host(`gregtech.eu`)
      - traefik.http.routers.https-lemmy.service=lemmy
      - traefik.http.routers.https-lemmy.tls=true
      - traefik.http.services.lemmy.loadbalancer.server.port=8536
      - traefik.http.routers.https-lemmy.tls.certResolver=le-ssl

  lemmy:
    image: dessalines/lemmy:0.19.8
    hostname: lemmy
    restart: always
    logging: *default-logging
    volumes:
      - ./lemmy.hjson:/config/config.hjson:Z
    depends_on:
      - postgres
      - pictrs
    networks:
      - default
      - database

  lemmy-ui:
    image: dessalines/lemmy-ui:0.19.8
    volumes:
      - ./volumes/lemmy-ui/extra_themes:/app/extra_themes:Z
    depends_on:
      - lemmy
    restart: always
    logging: *default-logging
    environment:
      - LEMMY_UI_LEMMY_INTERNAL_HOST=lemmy:8536
      - LEMMY_UI_LEMMY_EXTERNAL_HOST=gregtech.eu
      - LEMMY_UI_HTTPS=true

  pictrs:
    image: docker.io/asonix/pictrs:0.5
    # this needs to match the pictrs url in lemmy.hjson
    hostname: pictrs
    # we can set options to pictrs like this, here we set max. image size and forced format for conversion
    # entrypoint: /sbin/tini -- /usr/local/bin/pict-rs -p /mnt -m 4 --image-format webp
    #entrypoint: /sbin/tini -- /usr/local/bin/pict-rs run --max-file-count 10 --media-max-file-size 500 --media-retention-proxy 10d --media-retention-variants 10d filesystem sled -p /mnt
    user: 991:991
    environment:
      - PICTRS__STORE__TYPE=object_storage
      - PICTRS__STORE__ENDPOINT=https://s3.eu-central-003.backblazeb2.com/
      - PICTRS__STORE__BUCKET_NAME=gregtech-lemmy
      - PICTRS__STORE__REGION=eu-central
      - PICTRS__STORE__USE_PATH_STYLE=false
      - PICTRS__STORE__ACCESS_KEY=redacted
      - PICTRS__STORE__SECRET_KEY=redacted
      - MEDIA__RETENTION__VARIANTS=4d
      - MEDIA__RETENTION__PROXY=4d
      #- PICTRS__MEDIA__IMAGE__FORMAT=webp
      #- PICTRS__MEDIA__IMAGE__QUALITY__WEBP=50
      #- PICTRS__MEDIA__ANIMATION__QUALITY=50
    volumes:
      - ./volumes/pictrs:/mnt:Z
    restart: always
    logging: *default-logging

  postgres:
    image: docker.io/postgres:16-alpine
    hostname: postgres
    volumes:
      - ./volumes/postgres:/var/lib/postgresql/data:Z
      #- ./customPostgresql.conf:/etc/postgresql.conf:Z
    restart: always
    #command: postgres -c config_file=/etc/postgresql.conf
    shm_size: 256M
    logging: *default-logging
    environment:
      - POSTGRES_PASSWORD=password
      - POSTGRES_USER=lemmy
      - POSTGRES_DB=lemmy
    networks:
      - database

  postfix:
    image: docker.io/mwader/postfix-relay
    restart: "always"
    logging: *default-logging

  #pictrs-safety:
  #  image: ghcr.io/db0/pictrs-safety:v1.2.2
  #  hostname: pictrs-safety
  #  environment:
  #  ports:
  #    - "14051:14051"
  #  user: 991:991
  #  restart: always
  #  logging: *default-logging
  #  depends_on:
  #    - pictrs

networks:
  default:
    name: traefik_access
    external: true
  database:
```
-
Cost of 1 gigabyte of storage over time
cross-posted from: https://lemmy.world/post/22872422
Screenshot of a Twitter post by user JonErlichman
Average cost for 1 gigabyte of storage:
- 45 years ago: $438,000
- 40 years ago: $238,000
- 35 years ago: $48,720
- 30 years ago: $5,152
- 25 years ago: $455
- 20 years ago: $5
- 15 years ago: $0.55
- 10 years ago: $0.05
- 5 years ago: $0.03
- Today: $0.01
-
How do you extend C: partition of Windows Server 2022?
Windows Server 2022 creates a recovery partition immediately to the right of the C: partition, so when you need to expand C: you can't, because the recovery partition is in the way. I became aware of this problem because our IT department provides Windows Server virtual machines that users are unable to expand.
I would like to know how you are dealing with this problem. Do you remove the recovery partition? Do you keep it? If so, how?
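For reference, one common approach is: disable WinRE so its files are staged back into C:\Windows, delete the recovery partition, extend C:, then re-enable WinRE. An annotated sketch (the partition numbers are examples and vary per machine, so check the `list partition` output before deleting anything):

```
reagentc /disable
diskpart
  list disk
  select disk 0
  list partition
  select partition 4      (the recovery partition -- number varies!)
  delete partition override
  select partition 3      (the C: partition)
  extend
  exit
reagentc /enable
```

`reagentc /enable` recreates the recovery environment inside C:, so you keep WinRE without the blocking partition.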
-
Is this the future of Windows?
Recently Microsoft released the Windows 365 Link, which is basically a thin client for Azure. You can't run anything locally, nor are there any local files. It literally just connects you to a desktop elsewhere.
Do you think this is what Windows 12 might look like? I feel like this idea is not practical for average consumers. Maybe they will make something more like ChromeOS?
-
November 20th, 1985: Windows 1.0 is Released | The Vintage News
It's just a fad. It'll pass.
-
I hope this is the right place for this
Spent the last 3 months gathering requirements for computer upgrades. After that, picked out some decent laptops (ThinkPad L and T series).
Nothing fancy, but I'm just tired of diagnosing problems with the Vostro laptops the previous sysadmin purchased.
After getting quotes from multiple vendors, I finally got everything together and sent it to the CEO to confirm. Guess fucking what... It got fucking denied.
"Look for cheaper laptops and replace only what's critical"
Employees are rocking 7-year-old laptops with 128 GB SSDs! The bloody things can't even run Win 11! The whole upgrade costs less than one of their "teambuilding" events! I hate these cheapskates so fucking much...
-
Remember kids, "temporary", "just poc", and similar claims are NEVER TRUE
I was literally told to set up this new service as quickly as possible and it didn't need to be correct or best practice because this was just a proof of concept.
Here we are 6 months later and I'm still cleaning up my own mess.
-
Roku TVs are interference machines (rant)
Let me tell you about the hell that is Roku. They create so much wireless interference, and I can't just get rid of them.
The problem stems from WiFi Direct. They automatically scan for the busiest channel and then broadcast at full strength on that channel. I don't know why they do this, but it creates a crazy amount of interference.
And before you ask: no, you can't turn WiFi Direct off. The remote also uses WiFi Direct, for some reason.
WHY, ROKU WHY!
-
What skills should I cultivate to learn to be a sysadmin as a backup job opportunity?
Doing a PhD in humanities and enjoy it. I’ve recently really started to enjoy Linux, self hosting, and messing around with various lab stuff.
- arstechnica.com Thousands of hacked TP-Link routers used in yearslong account takeover attacks
The botnet is being skillfully used to launch “highly evasive” password-spraying attacks.
This is a Chinese attack that targets Azure
-
Testing DATTO backups?
Anyone here have any experience with a Datto Backup Appliance?
I have just been told that they've never run a full restoration in the six years that it's been in service, deployed for the backup of four mission-critical virtual Windows Servers, four Windows workstations, and a (physical?) Linux PABX server.
The actual appliance is apparently a "Datto S3-2000 BCDR"
Edit: The anal retentive in me is going WTF in a tight loop. The industry professional with 40 years experience in the field is going, different day, same old...
I realised that I didn't actually ask the pertinent question (the hamster wheel was running full tilt): is this normal, is this WTF, or somewhere in between?
-
My thoughts on Proxmox
As you all might be aware VMware is hiking prices again. (Surprise to no one)
Right now Hyper-V seems to be the most popular choice, with Proxmox the runner-up. Hyper-V is probably best for Windows shops, but my concern is that it will become Azure-tied at some point. I could be wrong, but somehow I don't trust Microsoft not to screw everyone over. They already deprecated WSUS, which is a pretty popular tool for Windows environments.
Proxmox seems to be a great alternative that many people are jumping on. It is still missing some bigger features but things like the data center manager are in the pipeline. However, I think many people (especially VMware admins) are fundamentally misunderstanding it.
Proxmox is not that unique and is built on FOSS. You could probably put together a Proxmox-like system without being completely over your head. It is just KVM, libvirt/QEMU, and Corosync, along with some other stuff like ZFS.
What Proxmox provides is convenience and reliability. It takes time to build such a system, and you are responsible when things go wrong. The DIY method is a good exercise, but not something you want to run in prod unless you have the proper staff and skill set.
And that is where the problem lies. There are companies coming from a Windows, point-and-click background that don't have staff who understand Linux. Proxmox is just Debian under the hood, so it is vulnerable to all the same issues. You can install updates from the GUI, but if you don't understand how Linux packaging works, you may end up blowing off your own foot. The same goes for networking and filesystems. To effectively maintain a Proxmox environment you need expertise. Proxmox makes it very easy to switch into cowboy mode and break the system. It is very flexible, but you must be very wary of making changes to the hypervisor, as that's the foundation for everything else.
I personally wish Proxmox would seriously consider an immutable architecture. TrueNAS already does this, and it would be nice to have a solid update system. They could ship a standalone OS image, or use something based on OSTree. Maybe even build in an update manager that can update each node and check its health.
Just my thoughts
-
Anyone noticed that ransomware has made the world a better place?
That sounds strange to say, but hear me out. Before ransomware there was no economic incentive for companies to worry about security. There was a strong "why would anyone hack us" vibe that made it hard to talk management into doing anything basic, like locking down ports.
Nowadays everyone and their mom is worried about getting compromised. I've seen companies that historically didn't care at all about IT suddenly invest heavily in security. We are now much more secure than we were, because everyone has finally realized that the internet carries huge risk. I doubt we will see any of the old-style worms we had back in the day that would infect millions of machines.
-
How to wget/curl files from OCI registries (docker, github packages)
This article will describe how to download an image from a (docker) container registry.
![Manual Download of Container Images with wget and curl](https://tech.michaelaltfield.net/2024/09/03/container-download-curl-wget)
Intro
Remember the good ol' days when you could just download software by visiting a website and clicking "download"?
Even `apt` and `yum` repositories were just simple HTTP servers that you could just `curl` (or `wget`) from. Using the package manager was, of course, more secure and convenient -- but you could always just download packages manually, if you wanted.

But have you ever tried to `curl` an image from a container registry, such as docker? Well friends, I have tried. And I have the scars to prove it.

It was a remarkably complex process that took me weeks to figure out. Lucky you, this article will break it down.
Examples
Specifically, we'll look at how to download files from two OCI registries.
Terms
First, here's some terminology used by OCI
- OCI - Open Container Initiative
- blob - A "blob" in the OCI spec just means a file
- manifest - A "manifest" in the OCI spec means a list of files
Prerequisites
This guide was written in 2024, and it uses the following software and versions:
- debian 12 (bookworm)
- curl 7.88.1
- OCI Distribution Spec v1.1.0 (which, unintuitively, uses the '/v2/' endpoint)
Of course, you'll need `curl` installed. And, to parse json, `jq` too.

```
sudo apt-get install curl jq
```
What is OCI?
OCI stands for Open Container Initiative.
OCI was originally formed in June 2015 for Docker and CoreOS. Today it's a wider, general-purpose (and annoyingly complex) way that many projects host files (that are extremely non-trivial to download).
One does not simply download a file from an OCI-compliant container registry. You must:
- Generate an authentication token for the API
- Make an API call to the registry, requesting to download a JSON "Manifest"
- Parse the JSON Manifest to figure out the hash of the file that you want
- Determine the download URL from the hash
- Download the file (which might actually be many distinct file "layers")
![One does not simply download from a container registry](https://tech.michaelaltfield.net/2024/09/03/container-download-curl-wget)
In order to figure out how to make an API call to the registry, you must first read (and understand) the OCI specs here.
- <https://opencontainers.org/release-notices/overview/>
OCI APIs
OCI maintains three distinct specifications:
- image spec
- runtime spec
- distribution spec
OCI "Distribution Spec" API
To figure out how to download a file from a container registry, we're interested in the "distribution spec". At the time of writing, the latest "distribution spec" can be downloaded here:
- <https://github.com/opencontainers/distribution-spec/releases/tag/v1.1.0>
- <https://github.com/opencontainers/distribution-spec/releases/download/v1.1.0/oci-distribution-spec-v1.1.0.pdf>
The above PDF file defines a set of API endpoints that we can use to query, parse, and then figure out how to download a file from a container registry. The table from the above PDF is copied below:
| ID | Method | API Endpoint | Success | Failure |
|------|----------|------------------------------------|--------|-----------|
| end-1 | `GET` | `/v2/` | `200` | `404`/`401` |
| end-2 | `GET`/`HEAD` | `/v2/<name>/blobs/<digest>` | `200` | `404` |
| end-3 | `GET`/`HEAD` | `/v2/<name>/manifests/<reference>` | `200` | `404` |
| end-4a | `POST` | `/v2/<name>/blobs/uploads/` | `202` | `404` |
| end-4b | `POST` | `/v2/<name>/blobs/uploads/?digest=<digest>` | `201`/`202` | `404`/`400` |
| end-5 | `PATCH` | `/v2/<name>/blobs/uploads/<reference>` | `202` | `404`/`416` |
| end-6 | `PUT` | `/v2/<name>/blobs/uploads/<reference>?digest=<digest>` | `201` | `404`/`400` |
| end-7 | `PUT` | `/v2/<name>/manifests/<reference>` | `201` | `404` |
| end-8a | `GET` | `/v2/<name>/tags/list` | `200` | `404` |
| end-8b | `GET` | `/v2/<name>/tags/list?n=<integer>&last=<integer>` | `200` | `404` |
| end-9 | `DELETE` | `/v2/<name>/manifests/<reference>` | `202` | `404`/`400`/`405` |
| end-10 | `DELETE` | `/v2/<name>/blobs/<digest>` | `202` | `404`/`405` |
| end-11 | `POST` | `/v2/<name>/blobs/uploads/?mount=<digest>&from=<other_name>` | `201` | `404` |
| end-12a | `GET` | `/v2/<name>/referrers/<digest>` | `200` | `404`/`400` |
| end-12b | `GET` | `/v2/<name>/referrers/<digest>?artifactType=<artifactType>` | `200` | `404`/`400` |
| end-13 | `GET` | `/v2/<name>/blobs/uploads/<reference>` | `204` | `404` |

In OCI, files are (cryptically) called "`blobs`". In order to figure out the file that we want to download, we must first reference the list of files (called a "`manifest`").

The above table shows us how we can download a list of files (manifest) and then download the actual file (blob).
Examples
Let's look at how to download files from a couple different OCI registries:
Docker Hub
To see the full example of downloading images from docker hub, click here
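The Docker Hub flow boils down to three curl calls: get an anonymous pull token, fetch the manifest (end-3 in the table), then fetch each blob by digest (end-2). This sketch only *prints* the commands (dry run, no network); `library/alpine:latest` is an example image, and `<digest>` is left as a placeholder that you'd fill in from the manifest's `layers` list.

```shell
#!/bin/sh
# Dry-run sketch of the Docker Hub pull flow: print the curl commands for
# token, manifest (end-3), and blob (end-2). Nothing touches the network.
print_pull_cmds() {
  image="$1"; tag="$2"
  # 1. anonymous pull token from Docker Hub's auth server
  echo "curl -s 'https://auth.docker.io/token?service=registry.docker.io&scope=repository:${image}:pull' | jq -r .token"
  # 2. fetch the manifest for the tag
  echo "curl -s -H \"Authorization: Bearer \$TOKEN\" -H 'Accept: application/vnd.oci.image.index.v1+json' 'https://registry-1.docker.io/v2/${image}/manifests/${tag}'"
  # 3. fetch one blob (layer) by the digest found in the manifest
  echo "curl -s -L -H \"Authorization: Bearer \$TOKEN\" 'https://registry-1.docker.io/v2/${image}/blobs/<digest>' -o layer.tar.gz"
}
```

Run `print_pull_cmds library/alpine latest` and execute the printed commands in order, saving the token from step 1 into `$TOKEN`.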
GitHub Packages
To see the full example of downloading files from GitHub Packages, click here.
Why?
I wrote this article because many, many folks have inquired about how to manually download files from OCI registries on the Internet, but their simple queries are usually returned with a barrage of useless counter-questions: why the heck would you want to do that!?!
The answer is varied.
Some people need to get files onto a restricted environment. Either their org doesn't grant them permission to install software on the machine, or the system has firewall-restricted internet access -- or doesn't have internet access at all.
3TOFU
Personally, the reason that I wanted to be able to download files from an OCI registry was for 3TOFU.
![Verifying Unsigned Releases with 3TOFU](https://tech.michaelaltfield.net/2024/09/03/container-download-curl-wget)
Unfortunately, most apps using OCI registries are extremely insecure. Docker, for example, will happily download malicious images. By default, it doesn't do any authenticity verification on the payloads it downloads. Even if you manually enable DCT, there are loads of pending issues with it.
Likewise, the macOS package manager brew has this same problem: it will happily download and install malicious code, because it doesn't use cryptography to verify the authenticity of anything that it downloads. This introduces watering hole vulnerabilities when developers use brew to install dependencies in their CI pipelines.
My solution to this? 3TOFU. And that requires me to be able to download the file (for verification) on three distinct linux VMs using curl or wget.
> ⚠ NOTE: 3TOFU is an approach to harm reduction. > > It is not wise to download and run binaries or code whose authenticity you cannot verify using a cryptographic signature from a key stored offline. However, sometimes we cannot avoid it. If you're going to proceed with running untrusted code, then following a 3TOFU procedure may reduce your risk, but it's better to avoid running unauthenticated code if at all possible.
Registry (ab)use
Container registries were created in 2013 to provide a clever & complex solution to a problem: how to package and serve multiple versions of simplified sources to various consumers spanning multiple operating systems and architectures -- while also packaging them into small, discrete "layers".
However, if your project is just serving simple files, then the only thing gained by uploading them to a complex system like a container registry is headaches. Why do developers do this?
In the case of brew, their free hosting provider (JFrog's Bintray) shut down in 2021. Brew was already hosting its code on GitHub, so I guess someone looked at "GitHub Packages" and figured it was a good (read: free) replacement.
Many developers using Container Registries don't need the complexity, but -- well -- they're just using it as a free place for their FOSS project to store some files, man.
-
What's the best way to monitor and log which processes are responsible for high system load throughout the day?
What's the best way to monitor and log which processes are responsible for high system load throughout the day? Tools like top and htop only provide immediate values, but I'm looking for a solution that offers historical data to identify the main culprits over time.
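Purpose-built tools (atop, sar from sysstat, netdata) keep exactly this kind of history, but as a minimal sketch of the idea: append timestamped `ps` snapshots to a log from cron, then grep the log later to see what was hot when load spiked. All paths and the interval here are examples.

```shell
#!/bin/sh
# Poor man's process accounting: append a timestamped snapshot of the
# top CPU consumers to a log file, to be run periodically from cron.
# Assumes procps-style ps (Linux).
snapshot() {
  log="$1"
  {
    echo "=== $(date '+%F %T') ==="
    ps -eo pid,comm,%cpu,%mem --sort=-%cpu | head -n 6
  } >> "$log"
}
```

A crontab entry like `*/5 * * * * /usr/local/bin/snapshot.sh /var/log/ps-snap.log` (hypothetical paths) gives you a searchable history; atop or sar remain the better answer for anything serious.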
-
Thousands of Devices Wiped Remotely Following Mobile Guardian Hack - SecurityWeek
Discussion question: Are we too centralized? (I know Lemmy isn't unbiased.)
- www.theregister.com ICANN approves use of .internal domain for your network
Vint Cerf revealed Google already uses the string, as do plenty of others
-
Knowledge share: How to use qemu on Windows with acceleration in 2024
So using qemu with Hyper-V acceleration is something that is not well documented. Historically, you would set up HAXM, but that has been discontinued and deprecated.
To use qemu on Windows with hardware acceleration, first enable Hyper-V if it isn't enabled already. Then run qemu with the following additional option:
--accel whpx,kernel-irqchip=off
In qtemu on Windows there is a GUI option for this. I like qemu because it's cleaner than pure Hyper-V and doesn't have the licensing issues that VirtualBox does. I also like that Linux guests have native support for the virtual devices.
https://www.qemu.org/docs/master/system/qemu-manpage.html
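Putting it together, a full invocation might look like this (the disk image name, memory, and CPU counts are examples; `-nic user` gives simple NAT networking):

```
qemu-system-x86_64.exe -m 4G -smp 4 --accel whpx,kernel-irqchip=off -hda guest.qcow2 -nic user
```

If the WHPX accelerator fails to initialize, double-check that the Hyper-V and Windows Hypervisor Platform features are both enabled.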
-
SumatraPDF: a lightweight FOSS PDF reader (can not edit)
For those who want an alternative to Adobe without using Edge.
-
Proxmox_gk: a shell tool for deploying LXC/QEMU guests, with Cloud-init
forum.proxmox.com: [TUTORIAL] Proxmox automator for deploying LXC and QEMU guests, with Cloud-init
Good evening everyone, I've just released a small command line utility for Proxmox v7/v8 to automate the provisioning and deployment of your containers and virtual machines with Cloud-init. Key features: unified configuration of LXC and QEMU/KVM guests via Cloud-init. Flexible guest...
-
Firefox cert issue
So we run VMware, and this morning I go and check a thing, and Firefox gives me an error.. connection insecure cert is invalid
No I don’t have the exact verbiage
But Edge and Chrome opened it just fine. Whisky Tango?
It was rekeyed, and reinstalling the cert was an easy-ish fix.
But I'm far more weirded out that FF slapped it down while the other two were like: ja sure, no problem...
??
Maybe should x post to c/firefox as well
- techcrunch.com CrowdStrike offers a $10 apology gift card to say sorry for outage | TechCrunch
Several people who received the CrowdStrike offer found that the gift card didn't work, while others got an error saying the voucher had been canceled.
-
How to Bypass Bitlocker for Crowdstrike BSoD (fix)
Took me a few hours to figure this out, figured I'd pass it along. Forgive formatting, I'm on mobile.
How to Bypass Bitlocker for Crowdstrike BSoD
Only use this if the Bitlocker key is lost.
From the Bitlocker screen, select Skip This Drive. A command prompt will appear.
Type bcdedit /set {default} safeboot network and press Enter.
Type Exit to exit the command prompt, then select Shut Down
Hardwire the device to the network
Login as an admin account
Navigate to C:\Windows\System32\Drivers\Crowdstrike and delete C:\windows\system32\drivers\crowdstrike\c-00000291-*.sys
Win+R to open the Run menu, then type msconfig and press Enter
Go to Boot
Uncheck the box for SafeBoot
You will receive a warning about Bitlocker. Proceed.
Click OK and you will be prompted to restart. Do so.
Have the user login
Test their access to files
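The command-line portion of the steps above, condensed into one transcript (the `bcdedit /deletevalue` line is the command-line equivalent of unchecking Safe boot in msconfig):

```
rem From the recovery command prompt (after "Skip this drive"):
bcdedit /set {default} safeboot network

rem After booting into safe mode with networking, as a local admin:
del C:\Windows\System32\drivers\CrowdStrike\C-00000291-*.sys

rem Equivalent of unchecking Safe boot in msconfig, then reboot:
bcdedit /deletevalue {default} safeboot
```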