BK is not bad in Italy, but I would argue Five Guys is better. Although they don't do "Italian" burgers like the one you described, I still feel their burgers are much tastier than BK's.
What if, instead, you used something that's meant for taking notes but that also has querying capabilities?
I have been using Silverbullet for a while and I absolutely love it. It uses Markdown files on disk, so it's very easy to back up, run secondary instances, and even just edit the files directly with any other program. But it also provides some extra syntax to define objects and query them, so you can for example build a library of recipes and have a page that lists all of the ones that have a specific tag, or take less than X time to cook, or whatever.
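As a rough illustration of the idea (this is plain Python, not Silverbullet's actual query syntax; check its docs for that), "querying your notes" can be as simple as scanning a folder of Markdown files for a tag:

```python
from pathlib import Path

def notes_with_tag(folder: str, tag: str) -> list[str]:
    # Generic sketch: find Markdown notes containing a #tag.
    # Silverbullet's real query language is much richer than this.
    matches = []
    for path in Path(folder).glob("**/*.md"):
        if f"#{tag}" in path.read_text(encoding="utf-8"):
            matches.append(path.name)
    return matches
```

The nice part is that because everything is plain Markdown on disk, this kind of tooling can live outside the note-taking app itself.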
If you don't give Immich write access to photos you lose one of its biggest advantages, i.e. having your phone upload photos directly. So now you need something else like Syncthing to do that job, which is not as elegant.
The claim was that it "contains every 15 seconds audio recording you can imagine. Every single one." Which is bullshit; that's like saying this program contains every single literary work:
```python
import sys

print(sys.argv[1])
```
It's just adding a layer of encoding on top so it feels less bullshitty, something like:
```python
import string

def decode(number: int) -> str:
    out = ""
    while number:
        number, letter_index = divmod(number, len(string.printable))
        out += string.printable[letter_index]
    return out
```
That also does not contain every possible (ASCII) book; it can decode any number into a text, and some numbers happen to decode into texts that are readable.
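To make the "it's just an encoding" point concrete, here is a sketch of the inverse direction, assuming the same `string.printable` alphabet as the decode above; it shows that any text "already exists" as some integer:

```python
import string

def encode(text: str) -> int:
    # Inverse of the decode sketch: fold each character back into a
    # base-100 number (string.printable has 100 characters).
    # Caveat: texts ending in string.printable[0] ('0') lose that
    # character on round-trip, like leading zeros in a number.
    base = len(string.printable)
    number = 0
    for ch in reversed(text):
        number = number * base + string.printable.index(ch)
    return number

print(encode("a"))  # 10 -- 'a' is at index 10 in string.printable
```

Claiming the decoder "contains" every book is exactly as meaningful as claiming the integers contain every book.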
I call bullshit on that. At 44,100 samples per second of 8 bits each, every second of sound is 44,100 bytes, or ~44 kB. Even for 1 second of audio it's impossible to generate all possibilities.
To put this in perspective, there's something called a Universally Unique Identifier (UUID for short); one of them is 128 bits, or 16 bytes. Let's imagine ids were 1 bit long: the second id you generate has a 50% chance of being a repeat of the first. If we extend this to 1 byte (i.e. 256 possibilities), the second id has a 1/256 chance of repeating the first, the third a roughly 2/256 chance of repeating one of the previous two, and so on; the chance that you have already generated a duplicate crosses 50% at around the 20th id (this is the classic birthday problem). Why did I do those examples? Because a UUID has 16 bytes: with the standard version-4 UUID (122 random bits), even if you generated a billion per second, it would take you on the order of 86 years to have a 50% chance of having generated a repeated one, and by that time you would need about 45 EB of storage to hold them all (that's not a typo, it's Exabytes, as in 1024 PB (Petabytes), each of which is 1024 TB (Terabytes), the first measure people are likely to be familiar with).
Let me again try to put this in perspective: if Google, Amazon, Microsoft and Facebook emptied all of their storage just for this, they would have around 2 Exabytes between them, well short of the roughly 45 EB needed to store the unique ids you would generate from 16 bytes of random data before reaching a 50% chance of a repeat; you would need a conglomerate more than 20 times larger.
Another way of thinking about this: the number of possible combinations of 1 bit is 2, of 2 bits is 4, of 3 bits is 8; it grows exponentially, so for n bits it's 2^n. For the 128 bits of a UUID that is about 3.4E38 combinations; even counting just a single bit per combination, that's 3.5E13 YB of data (again, not a typo, that's Yottabytes, 1024 Zettabytes each), i.e. 35,000,000,000,000 YB (I could go up a few more orders of magnitude, but I think I made my point). And this is for 128 bits; every extra bit doubles that amount.
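The "how many ids until a 50% collision chance" numbers above come from the birthday problem; the standard approximation (k ≈ √(2N ln 2) draws from N equally likely values) is easy to check in Python:

```python
import math

def draws_for_half_collision(n_values: float) -> float:
    # Birthday-problem approximation: number of uniform random draws
    # from n_values possibilities until P(at least one repeat) ~ 50%.
    # The approximation is crude for tiny n_values.
    return math.sqrt(2 * n_values * math.log(2))

print(round(draws_for_half_collision(2**8)))          # 1-byte ids: ~19
print(f"{draws_for_half_collision(2**128):.2e}")      # fully random 128 bits
```

Note the square-root scaling: doubling the number of bits in the id squares the number of draws you can make before collisions become likely.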
So again, I call bullshit that they have all possible sounds for even 1 second of audio, which at 352,800 bits is over 2,700 times the length of a UUID.
Extra question for people who have been using it: it says the bandwidth is unlimited, but how unlimited are we talking? I was considering getting one to use as a reverse proxy into my home lab so it's accessible from the outside, which would mean lots of bandwidth usage, media-streaming amounts of it.
That's interesting, although most of it is directed at people building the images. The fact that pushing without a tag sets latest is something I did not know, and something where I could see the human factor causing a problem.
Why? Latest means the latest stable for most services.
That's the thing: if the project is too early to have a structure stable enough to allow for programmatic updates, then it's probably too early to offer something "perpetual".
I agree, and I'm not trying to bad-mouth the project; I just feel that they shouldn't change from a donation structure until they have a stable version of the product.
Yeah, I have high hopes for the project, it ticks almost every box for me. I would still prefer to be able to store tags in the actual images and use them, and also to be able to recover a library that's already in the proper folder structure (so in the case of a catastrophic failure, reimporting the full library is a matter of minutes, not days, not to mention not having to retag people, etc.).
My point is that projects should ask for donations when they're this early in development; asking for a subscription implies you have a stable product.
Yup, and I'm fine with that, but I think that switching from a donation to a subscription model before then is wrong.
Why do you think they need outsourcing? Do you really think that 100 people is not enough?
I don't mind this model. That being said, for me Immich is great but has a fatal flaw that has prevented me from using it: it doesn't handle updates for you.
For me that's a big one. Everything else I self-host has a docker-compose pointing to latest, so eventually I do a pull and an up and I'm done, running the latest version of the thing. With Immich this is not possible: I discovered the hard way that releases are not backwards compatible, and that if you update that way you need to keep track of their release notes to know what you have to do manually to update.
I haven't settled on a self-hosted photo-management solution because of this. In theory Immich has almost everything I want (or more specifically, all of the other solutions I found lack something), but having to keep track of releases to do manual upgrades is stupid. This is software; it should be easy to have it check the version on start and perform migration tasks if needed.
For mounting it's a bit trickier. Just like you added an entry to fstab to say that you wanted to mount (for example) `/dev/sdb2` on `/ntfs`, you would need to add another one saying you want to mount `/ntfs/downloads` to `/home/<username>/Downloads`. If you want to run this as a one-off, the command is `mount --bind /ntfs/downloads /home/<username>/Downloads` (but note that a mount done with the command becomes undone when you reboot; the only way to preserve it across reboots is an fstab entry).
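For the fstab route, a bind-mount entry looks roughly like this (using the example paths from above; adjust them to your setup):

```
/ntfs/downloads  /home/<username>/Downloads  none  bind  0  0
```

After adding it, `sudo mount -a` applies all fstab entries without rebooting, which is also a handy way to check you didn't make a typo.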
What this does is essentially tell the kernel that one path is the other. How is this different from a link? Well, a link is just a file that points to the other place, whereas a mount is the other place. A couple of examples of how this differs:
- If you had a Downloads folder you would need to rename or delete it before making a link there. Mounting, on the other hand, requires that the Downloads folder exists, and will hide anything inside it while the other folder is mounted. This means that if you had files inside Downloads and you mounted the other folder on top, those files are still on the disk, but you have no way of accessing them until you unmount the folder.
- Links don't work outside the system they point into. This is likely not important to you, but if you are for example doing things with chroot or Docker, a link whose target exists only on the host can become a problem.
In short, a link is like a door that, when you open it, tells you "go to that other door", whereas a mount replaces the room behind the door with another one. Most programs are smart enough to go to the other door, and in most cases the other door exists, so all is good. In some edge cases (like I said: Docker, chroot, etc.) the "go to door X" can be a problem if, inside the contained system, door X doesn't exist.
PS: I don't know of any way of doing this graphically; this is advanced stuff, so it's likely expected that people who want to mount folders know enough to do it in a terminal.
Hey man, I think this is a perfectly valid question to ask here. Also I was one of the people who replied on the other thread as well.
So, let's start with the why. I imagine you want `~/Downloads` to be inside your large disk so files get automatically downloaded there, and I imagine `~/Documents` is to have access to the same documents on both OSs. If that's not the why, or there's something else, let me know, as I'll be basing my answer on this assumption.
Last time we told you how you can mount things wherever you want, so I imagine by now you have an entry in your fstab that automatically mounts that NTFS drive somewhere. I'll call that somewhere `/ntfs` just to give it a name/path, but any other path works the same.
If you wanted your ENTIRE NTFS partition to be on `~/Downloads`, it's as easy as changing that fstab entry from `/ntfs` to `/home/gpstarman/Downloads` (or whatever your username is). But I imagine you want something more complex: you want `/ntfs/downloads` and `~/Downloads` to be the same directory.
Like you found out, there are two ways to do this. The first and easiest is to create a link. To do it graphically, just open whatever file explorer you use, right-click and drag from one path to the other, and you should get an option like "link here" or something similar. Note that you might need to delete or rename your existing `~/Downloads` folder for the link to have that name. If you wanted to do it from the command line it's `ln -s <target> <link name>`, so in your hypothetical case: `ln -s /ntfs/downloads ~/Downloads`
This should work for 99% of cases and honestly I don't think you should care too much about mounting. I'll reply to this comment with the steps for mounting and explaining why it's different just to be on the safe side.
You should read up on statistics. An aim-bot will be consistently accurate; humans are not. If your aim-bot is purposefully inaccurate, then it's useless. Long story short, your cheating has to be indistinguishable from a human, which is HARD to accomplish, and if you manage that you'll lose 50% of the matches against other humans.
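As a toy illustration of the statistics point (all numbers here are invented, and real anti-cheats are far more sophisticated), machine-perfect consistency is itself a detectable signal:

```python
import statistics

def looks_botlike(aim_errors_deg: list[float], min_spread: float = 0.05) -> bool:
    # Toy heuristic: a human's aiming error varies a lot from shot to
    # shot, while a naive aim-bot's barely varies. A suspiciously low
    # spread gets flagged. Threshold and units are made up.
    return statistics.stdev(aim_errors_deg) < min_spread

print(looks_botlike([1.2, 0.4, 2.1, 0.9, 1.7]))            # human-like spread
print(looks_botlike([0.010, 0.012, 0.011, 0.013, 0.012]))  # bot-like spread
```

The cheater's countermove is to inject human-like noise, but the more noise you add, the less the bot out-performs an actual human, which is the whole point.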
Not to mention a game with server-side anti-cheat could purposefully send fake data, e.g. a position for an "invisible" enemy; if you aim or fire at it, you get flagged. It can do lots of similar things that would make the aim-bot less accurate than a human, e.g. every time an enemy enters line of sight, add another enemy just outside the frustum culling, or send an enemy behind a wall with no visible parts. Cheaters will act on that information; regular users won't. At that point the only way to bypass it is with external hardware that acts on the same information an actual user does (which bypasses client-side anti-cheat anyway), and at that point you have a robot playing the game for you and losing 50% of the battles...
I'm not the person you were talking with, but I mostly agree with them.
Here's the thing: client-side anti-cheat is a losing battle. It's the equivalent of adding spikes to your key before giving it to someone so they won't be able to open your door; once they have the key, they can remove the spikes. Client-side anti-cheat can ALWAYS be bypassed; it relies on security by obscurity to prevent people from removing the actual check, but it's a losing battle, no exceptions.
Server-side anti-cheat is the only method that has the possibility of being accurate. Like you said, you can make your aim-bot indistinguishable from a human, but then you're going to be at a human level, and other humans might beat you. Any game that worries about this already has skill-based matchmaking, which means that cheaters will end up playing with other cheaters or humans of a similar skill level, so who cares? You might get one cheater who's still ranking up in a match, but in the long run they'll cluster together.
No, because the thing that's breaking is not graphics-related (which is the major difference between Windows and Unix); it's very likely an anti-cheat measure that's trying to gain root access to the computer, and when it can't, it crashes the game. This is not an issue on PlayStation because Sony controls the OS, so they're okay with giving root access to the game or (more likely) are okay with that part of the code not running on a PlayStation.
AFAIK the multiplayer was working for PlayStation titles, so much so that some of them got the Verified check, and that's supposed to be given only if everything works. So yes, they're purposefully breaking it by adding a dependency on (probably) kernel-level anti-cheat, since any other Windows API could be translated by Wine (but the kernel-level anti-cheats purposefully check whether they're running under Wine and throw an error).