N0x0n @lemmy.ml
Posts 17
Comments 385
Findroid is a native android jellyfin client.
  • Being able to stream my shows on an unstable or lower bandwidth internet connection like on a train

    Oh yeah, good point, I wasn't thinking of that kind of use case. Internet is available everywhere now, and I'm so used to gigabit Ethernet and high-speed WiFi/5G that I forgot about the low speed of public WiFi or locations where the connection can get unstable.

    You could argue I should adapt my habits to my means but I frankly really think it should be the other way around, and transcoding solves that for me.

    In the past I probably would ^^" but today it's nearly impossible if you want a balanced life in a daily working/study routine. There's so much to do, too much to think of, too much information... Automating stuff is where you can gain hours in the long run, so I totally get it !

    Thanks for your answer !

  • Findroid is a native android jellyfin client.
  • What kind of stylized subtitles? I don't have a big library, so I've never encountered this kind of trouble. But I'm curious to know how to circumvent it in advance.

    Most anime have .ass subtitles, which are sometimes kinda complex with song-related styling, but I've never had any issues with them on Android.

    And most movies have simple plain text subtitles.

  • Findroid is a native android jellyfin client.
  • Just a personal use case, maybe it isn't an advantage. But the official Android app is just a web wrapper, and using MPV as an external player doesn't allow self-signed local certificates (and it never will...).

    Findroid does the job for you while using MPV under the hood and you can connect to your local DNS with self-signed certs without any issues :).

  • Findroid is a native android jellyfin client.
  • May I ask why? Maybe I haven't been in your actual situation, so I probably can't relate.

    However, having everything in a format that every device can read, and disabling transcoding on Jellyfin, saves resources and power.

  • I am the fool (linux install)
  • Yeah, maybe I've gotten so used to SSDs that I can't remember the leap between SSDs and HDDs.

    And as you said, the difference between M.2 drives isn't that noticeable in games. There probably lies my bias.

  • I am the fool (linux install)
  • Does it really make that much of a difference? Sure, I've been using SSDs for a long time now, but I haven't seen that much of a speed improvement over HDDs in games. Even with an M.2, I haven't seen any improvement.

    However data transfer speed is another story !

  • FluxTube | Flutter/Dart YouTube Client! | This could be the new Newpipe
  • Yep ! No-ads, no-sponsor, no-shit.

    You don't even need to self-host: just disable the Piped proxy, enable local extraction, use HLS and a good VPN.

    Sure, it's not as anonymous, and sometimes I need to disable my VPN, but that's only temporary, until they find a new loophole in YouTube's API.

    That's not Piped's nor Invidious' backend's fault, just YouTube doing its cat-and-mouse thing...

  • Mozilla grants Ente $100k
  • Nobody ever talks about Lychee?

    Yes, okay, it's not GPL or written in a fancy new language (PHP is still alive xD). But it's simple, elegant, no UX bloat, no ML or AI stuff... Just a plain simple self-hosted photo manager.

    One thing I really liked about it: you can import your external photos with .xmp files, just one checkbox away.

    The tag feature is simple but works as expected. Nothing fancy, but it does what it's supposed to do best !!

    Call me an old boomer, but I really like the simplicity of Lychee. It's a bit like reading an article from Miniflux or Wallabag... Simple HTML files without bloating your eyes or your brain...

    Just my 2c, nothing to see here !

  • PSA/HOWTO: Avoid fake mkv torrents. Avoid getting hacked
  • For those interested, John Hammond did a video a few months ago about the .lnk extension (and 16 other hidden extensions on Windows).

    He doesn't go too deep into the subject, but you get a general view of how this could be exploited.

    YouTube link

    Piped Link

  • What's the trick to Menopause?
  • Why the hate? Natural/traditional therapy has been used far longer than classical medicine, and we've survived till today...

    People tend to forget that classical medicine is no older than a few hundred years. Gosh people... Shut down your brains and open your minds !

  • linux or windows?
  • I do use a Mac and I hate it... It was a birthday gift from my family, because owning a Mac makes you "the man"...

    Uuhg, I always need to learn things twice... First how it works on Linux, and then how to reproduce the same on Mac...

    There are too many shitty workarounds that don't behave the same way Linux does, even though it's UNIX-based.

    • .plist files come to mind
    • how to make a Samba share mount on boot/access
    • Defaults to zsh
    • Shitty default terminal and dumb keyboard shortcuts...
    • Default applications are useless... (Thanks homebrew 👏)

    I fucking hate it... And after 4 years of intense use, I still don't understand why people would willingly buy into that closed crap ecosystem. Maybe just a hipster thing...

  • What's the trick to Menopause?
  • There are already a lot of good comments, so I will emphasize something that hasn't been said already.

    Nature is full of wonders. It won't be a one-time miracle cure, but you can always use some complementary assistance from a GOOD herbalist, who will advise you on specific herbs in a variety of forms (tea, essential oils, infusions...) to relax and maybe reduce some symptoms over time.

    I've heard sage tea is very feminine and good for everything related to menopause. I'm no expert, so don't take it for granted.

    While it won't solve or heal your partner's health issues directly, it could be a good complement to any help session. A nice warm infusion, a bath with scented EO, a massage in a room filled with EO diffusion... There's a lot you can do with nature's help :)

  • Docker and Databases: Why choose one over another? Does it matter?

    Hi everyone !

    Intro

    It's been a long ride since I started my first Docker container 3 years ago. I learned a lot, from building my custom image with a Dockerfile, to loading my own configuration files into the container, to getting along with docker-compose, Traefik and YAML syntax... and and and !

    However, while tinkering with Vaultwarden's config and changing to PostgreSQL, there's something that's really bugging me...

    Questions

    ---

    • How do you/devs choose which database to use for your/their application? Are there any specific things to take into account before choosing one over another?
    • Does consistency in database containers make sense? I mean, changing all my containers to ONLY Postgres (or MariaDB, whatever)?
    • Does it make sense to update the database image regularly? Or is the application bound to a specific version, and will it break after any update?
    • Can I switch between one and another even if you/devs chose to use e.g. MariaDB? Or is it baked/hardcoded into the application image, and switching to another database requires extra programming skills?

    Maybe not directly related to databases but that one is also bugging me for some time now:

    • What's Redis' role in all of this? I can't for the life of me understand what it does and how it's linked between the application and the database. I know it's supposed to give faster access to resources, but if I remember correctly, while playing around with Nextcloud, the Redis container logs were dead silent; it seemed very "useless" or inactive from my perspective. I'm always wondering "Hmm, Redis... what are you doing here?".
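    One way to check whether an application is actually using Redis is to watch the commands it sends, since the container logs themselves are often silent even when the cache is busy. A minimal diagnostic sketch (the container name `redis` is an assumption; substitute your compose service name):

    ```shell
    # Stream every command the application sends to Redis in real time;
    # a dead-silent stream means the app is not talking to it at all.
    docker exec -it redis redis-cli monitor

    # Or check the cache hit/miss counters accumulated since startup.
    docker exec -it redis redis-cli info stats | grep keyspace
    ```

    If `keyspace_hits` keeps growing while you click around the app, Redis is doing its job as a cache in front of the database.
    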

    Thanks :)


    Exiftool bash script to process image in a specific time range recursively.

    Edit

    After a long process of roaming the web, re-running and troubleshooting the script with this wonderful community, the script is functional and does what it's intended to do. The script itself is probably further improvable in terms of efficiency/logic, but I lack the necessary skills/knowledge to do so; feel free to copy, edit or even propose a more efficient way of doing the same thing.

    I'm greatly thankful to @[email protected], @[email protected], @[email protected] and Phil Harvey (exiftool) for their help, time and all the great ideas (and for spoon-feeding me with simple and comprehensible examples !)

    How to use

    Prerequisites:

    • parallel package installed on your distribution

    Copy/paste the script below into a file and make it executable. Change start_range/end_range to your needs, install the parallel package depending on your OS, and run the following command:

    time find /path/to/your/image/directory/ -type f | parallel ./script-name.sh

    This will sort only the pictures from your specified time range into the structure YEAR/MONTH in your current directory, drawing on 5 different time tags/timestamps (DateTimeOriginal, CreateDate, FileModifyDate, ModifyDate, DateAcquired).
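    The resulting layout looks roughly like this (month names come from the %B date format; file names are illustrative):

    ```
    2018/
    ├── January/
    │   ├── IMG_0001.jpg
    │   └── IMG_0002.jpg
    └── March/
        └── IMG_0103.jpg
    ```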

    You may want to swap ModifyDate and FileModifyDate in the script, because ModifyDate is more accurate, in the sense that FileModifyDate is easily changed (as soon as you make some modification to the picture, it changes to the current date). I needed that order for my specific use case.

    From: '-directory<$DateAcquired/' '-directory<$ModifyDate/' '-directory<$FileModifyDate/' '-directory<$CreateDate/' '-directory<$DateTimeOriginal/'

    To: '-directory<$DateAcquired/' '-directory<$FileModifyDate/' '-directory<$ModifyDate/' '-directory<$CreateDate/' '-directory<$DateTimeOriginal/'

    As per exiftool's documentation: > ExifTool evaluates the command-line arguments left to right, and latter assignments to the same tag override earlier ones.

    ```
    #!/bin/bash

    if [ $# -eq 0 ]; then
        echo "Usage: $0 <filename>"
        exit 1
    fi

    # Concatenate all arguments into one string for the filename, so calling
    # "./script.sh /path/with spaces.jpg" should work without quoting
    filename="$*"

    start_range=20170101
    end_range=20201230

    # Take the first available date among the five candidate tags
    FIRST_DATE=$(exiftool -m -d '%Y%m%d' -T -DateTimeOriginal -CreateDate -FileModifyDate -DateAcquired -ModifyDate "$filename" | tr -d '-' | awk '{print $1}')

    if [[ "$FIRST_DATE" != '' ]] && [[ "$FIRST_DATE" -gt $start_range ]] && [[ "$FIRST_DATE" -lt $end_range ]]; then
        exiftool -api QuickTimeUTC -d %Y/%B '-directory<$DateAcquired/' '-directory<$ModifyDate/' '-directory<$FileModifyDate/' '-directory<$CreateDate/' '-directory<$DateTimeOriginal/' '-FileName=%f%-c.%e' "$filename"
    else
        echo "Not in the specified time range"
    fi
    ```

    ---

    Hi everyone !

    Please no bash-shaming; I did my utmost best to somehow put everything together and make it work without any prior bash programming knowledge. It took me a lot of effort and time.

    While I'm pretty happy with the result, I find the execution time very slow: 16min for 2288 files.

    On a big folder with approximately 50,062 files, this would take over 6 hours !!!

    If someone could have a look and give me some easy to understand hints, I would greatly appreciate it.

    What am I trying to achieve?

    Create a bash script that uses exiftool to strip the date from images into a readable format (e.g. 20240101) and compare it against a start_range/end_range, to sort only images from that specific date range (e.g. 2020-01-01 -> 2020-12-30).

    Also, some images lost some EXIF data, so I have to loop through specific time fields:

    • DateTimeOriginal
    • CreateDate
    • FileModifyDate
    • DateAcquired

    The script in question

    ```
    #!/bin/bash

    shopt -s globstar

    folder_name=/home/user/Pictures
    start_range=20170101
    end_range=20180130

    for filename in $folder_name/**/*; do

        if [[ $(/usr/bin/vendor_perl/exiftool -m -d '%Y%m%d' -T -DateTimeOriginal "$filename") =~ ^[0-9]+$ ]]; then
            DateTimeOriginal=$(/usr/bin/vendor_perl/exiftool -d '%Y%m%d' -T -DateTimeOriginal "$filename")
            if [ "$DateTimeOriginal" -gt $start_range ] && [ "$DateTimeOriginal" -lt $end_range ]; then
                /usr/bin/vendor_perl/exiftool -api QuickTimeUTC -r -d %Y/%B '-directory<$DateTimeOriginal/' '-FileName=%f%-c.%e' "$filename"
                echo "Found a value"
                echo "Okay its $(tput setab 22)DateTimeOriginal$(tput sgr0)"
            fi

        elif [[ $(/usr/bin/vendor_perl/exiftool -m -d '%Y%m%d' -T -CreateDate "$filename") =~ ^[0-9]+$ ]]; then
            CreateDate=$(/usr/bin/vendor_perl/exiftool -d '%Y%m%d' -T -CreateDate "$filename")
            if [ "$CreateDate" -gt $start_range ] && [ "$CreateDate" -lt $end_range ]; then
                /usr/bin/vendor_perl/exiftool -api QuickTimeUTC -r -d %Y/%B '-directory<$CreateDate/' '-FileName=%f%-c.%e' "$filename"
                echo "Found a value"
                echo "Okay its $(tput setab 27)CreateDate$(tput sgr0)"
            fi

        elif [[ $(/usr/bin/vendor_perl/exiftool -m -d '%Y%m%d' -T -FileModifyDate "$filename") =~ ^[0-9]+$ ]]; then
            FileModifyDate=$(/usr/bin/vendor_perl/exiftool -d '%Y%m%d' -T -FileModifyDate "$filename")
            if [ "$FileModifyDate" -gt $start_range ] && [ "$FileModifyDate" -lt $end_range ]; then
                /usr/bin/vendor_perl/exiftool -api QuickTimeUTC -r -d %Y/%B '-directory<$FileModifyDate/' '-FileName=%f%-c.%e' "$filename"
                echo "Found a value"
                echo "Okay its $(tput setab 202)FileModifyDate$(tput sgr0)"
            fi

        elif [[ $(/usr/bin/vendor_perl/exiftool -m -d '%Y%m%d' -T -DateAcquired "$filename") =~ ^[0-9]+$ ]]; then
            DateAcquired=$(/usr/bin/vendor_perl/exiftool -d '%Y%m%d' -T -DateAcquired "$filename")
            if [ "$DateAcquired" -gt $start_range ] && [ "$DateAcquired" -lt $end_range ]; then
                /usr/bin/vendor_perl/exiftool -api QuickTimeUTC -r -d %Y/%B '-directory<$DateAcquired/' '-FileName=%f%-c.%e' "$filename"
                echo "Found a value"
                echo "Okay its $(tput setab 172)DateAcquired$(tput sgr0)"
            fi

        elif [[ $(/usr/bin/vendor_perl/exiftool -m -d '%Y%m%d' -T -ModifyDate "$filename") =~ ^[0-9]+$ ]]; then
            ModifyDate=$(/usr/bin/vendor_perl/exiftool -d '%Y%m%d' -T -ModifyDate "$filename")
            if [ "$ModifyDate" -gt $start_range ] && [ "$ModifyDate" -lt $end_range ]; then
                /usr/bin/vendor_perl/exiftool -api QuickTimeUTC -r -d %Y/%B '-directory<$ModifyDate/' '-FileName=%f%-c.%e' "$filename"
                echo "Found a value"
                echo "Okay its $(tput setab 135)ModifyDate$(tput sgr0)"
            fi

        else
            echo "No EXIF field found"
        fi
    done
    ```

    Things I have tried

    1. Reducing the number of if calls

    But it didn't improve the execution time much (maybe a few ms?). The syntax looks way less readable, but what I did was add a lot of ORs (||) to reduce everything to a single if call. It's not finished; I just gave it a test drive with 2 EXIF fields (DateTimeOriginal and CreateDate) to see if it could somehow improve the time. But meeeh :/.

    ```
    #!/bin/bash

    shopt -s globstar

    folder_name=/home/user/Pictures
    start_range=20170101
    end_range=20201230

    for filename in $folder_name/**/*; do

        if [[ $(/usr/bin/vendor_perl/exiftool -m -d '%Y%m%d' -T -DateTimeOriginal "$filename") =~ ^[0-9]+$ ]] || [[ $(/usr/bin/vendor_perl/exiftool -m -d '%Y%m%d' -T -CreateDate "$filename") =~ ^[0-9]+$ ]]; then
            DateTimeOriginal=$(/usr/bin/vendor_perl/exiftool -d '%Y%m%d' -T -DateTimeOriginal "$filename")
            CreateDate=$(/usr/bin/vendor_perl/exiftool -d '%Y%m%d' -T -CreateDate "$filename")
            if [ "$DateTimeOriginal" -gt $start_range ] && [ "$DateTimeOriginal" -lt $end_range ] || [ "$CreateDate" -gt $start_range ] && [ "$CreateDate" -lt $end_range ]; then
                /usr/bin/vendor_perl/exiftool -api QuickTimeUTC -r -d %Y/%B '-directory<$DateTimeOriginal/' '-directory<$CreateDate/' '-FileName=%f%-c.%e' "$filename"
                echo "Found a value"
                echo "Okay its $(tput setab 22)DateTimeOriginal$(tput sgr0)"
            else
                echo "FINISH YOUR SYNTAX !!"
            fi
        fi
    done
    ```

    2. Playing around with find

    To recursively find my image files in all my folders, I first tried the find command, but that gave me a lot of headaches... When an image file name had spaces in it, it just broke the image path strangely... All the answers I found on the web were gibberish, and I couldn't make it work in my script properly... I lost over 4 hours on that specific issue alone !

    To overcome the hurdle, someone suggested using shopt -s globstar with for filename in $folder_name/**/*, and this works perfectly. But I have no idea if this could be the culprit behind the slow execution time?
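    For reference, find can handle names with spaces if you NUL-terminate the paths instead of letting the shell split on whitespace; a minimal sketch (the directory path is a placeholder):

    ```shell
    #!/bin/bash
    # Read NUL-delimited paths from find, so spaces and even newlines
    # inside file names survive intact.
    folder_name=/home/user/Pictures   # hypothetical path; adjust to yours
    while IFS= read -r -d '' filename; do
        echo "processing: $filename"
    done < <(find "$folder_name" -type f -print0)
    ```

    This is also what `find ... -print0 | parallel -0` relies on, and it sidesteps globstar entirely.
    
    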

    3. Changing all [ ] into [[ ]]

    That also didn't do the trick.

    How to Improve the processing time ?

    I have no idea if it's my script or the exiftool calls that make the script so slow. This isn't that complicated a script; I mean, it's a comparison between 2 integers, not hashing of complex numbers.
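    Most of the time likely goes to process startup: every exiftool call launches a fresh Perl interpreter, and the loop spawns up to ten of them per file. exiftool can evaluate a condition itself with -if, so a single recursive invocation can replace the whole loop. A rough, untested sketch for the DateTimeOriginal case only (the date range values are assumptions; raw EXIF dates like "2017:01:01 12:00:00" compare correctly as strings):

    ```shell
    # One exiftool process walks the whole tree; -if skips files whose
    # DateTimeOriginal is missing or outside the range.
    exiftool -r -m -api QuickTimeUTC -d %Y/%B \
      -if '$DateTimeOriginal ge "2017:01:01" and $DateTimeOriginal le "2020:12:30"' \
      '-directory<$DateTimeOriginal/' '-FileName=%f%-c.%e' \
      /home/user/Pictures
    ```

    The fallback chain across five tags is harder to express in one -if, but even handling only the common tag this way should remove most of the per-file startup cost.
    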

    I hope someone could guide me in the right direction :)

    Thanks !


    Manjaro, out of curiosity: does the image on boot have any security implications regarding LogoFAIL?

    Hi everyone :).

    I'm just getting started with Manjaro as a daily driver, to get an easier Arch-based distro. Except for the LVM bug with Calamares, everything is pretty smooth :).

    But at first boot, I saw they have added their own Manjaro logo on boot, and I immediately thought of the LogoFAIL exploit I heard about a few months ago. It made me curious whether this is something that could be exploitable in Manjaro.

    Probably not, since this would harm their image and the system they worked hard on, but I'm still curious... If someone smarter/more knowledgeable than me could chime in and give some valuable information on this topic regarding Manjaro, I would really appreciate it !

    Thank you !


    What will happen to SimpleX if the new laws happen to be voted in the EU ? :/

    Hi everyone.

    I'm curious to understand what could happen to SimpleX if the new "security" plan in the EU gets voted in.

    Because I'm not versed enough in the political and legal wording of those papers, I have a hard time actually understanding them.

    • Will SimpleX be obligated to comply?
    • Will SimpleX retire from the EU?
    • Would it be illegal to use SimpleX if the bill passes?
    • Could we still use SimpleX with a proxy/VPN from a country outside the EU?
    • ...

    I'm genuinely concerned about what I'm reading here and there on Lemmy... I hope someone can give me some interesting points of view.

    Thanks.


    Virtual networking docker (bridge)

    cross-posted from: https://lemmy.ml/post/15968883

    Hello everyone ! Nobody seems to have an answer on [email protected] (or maybe they are not interested because it's an enterprise network community?) and [email protected] seems dead?

    Anyway, if anyone could guide me or point me in the right direction, I would really appreciate it !

    ---

    TL;DR

    What is encapsulated into the frame that makes everyone understand: "OHHH that’s for 10.0.0.8, your docker container on bridge network br-b1de on the veth2b interface !!! "

    --- Hi everyone !

    I'm scratching my head in finding an actual answer on how virtual networking in docker actually works (mostly on the packets/frame level) or some good documentation to improve my understanding on how everything fits together.

    Because I'm probably lacking the correct network terminology I made a simple network topology of my network. Don't hesitate to correct any network mistake.

    (network topology diagram)

    In my scenario, my docker container with the virtual interface veth2b22c98 and the IP 10.0.0.8 connects to the bridge network br-b1de95b5ea89. When I curl lemmy.ml from my container, the packets/frames are sent to my enp4s0 and go through my WireGuard tunnel to my VPN provider, which sends back the packet/frame/handshake...

    I probed every interface with tcpdump (enp4s0, wg0, br-b1,veth2b):

    • enp4s0: Every packet/frame is encapsulated in the WireGuard protocol with my physical interface's IP (192.168.1.30), no DNS is visible on that interface (as expected), and it is sent out to my ISP's public IP.

    • wg0: Shows every packet/frame with the actual protocol with my wireguard's interface IP (192.168.2.1) with the destination IP of lemmy.ml (Dst: 54.36.178.108)

    • br-b1: Shows every packet/frame with the actual protocol with my containers IP (10.0.0.8) with the destination IP of lemmy.ml (Dst: 54.36.178.108)
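    The probes above can be sketched as plain tcpdump invocations (interface names and the destination IP come from my setup; adjust to yours):

    ```shell
    # Bridge side: the container's own IP (10.0.0.8) should be visible here.
    sudo tcpdump -ni br-b1de95b5ea89 host 54.36.178.108

    # Tunnel side: the source should already be rewritten to the wg0 IP.
    sudo tcpdump -ni wg0 host 54.36.178.108

    # Physical interface: only encrypted WireGuard UDP traffic should appear.
    sudo tcpdump -ni enp4s0 udp
    ```

    The rewrite between br-b1 and wg0 is NAT (Docker installs a MASQUERADE rule for the bridge subnet), and the kernel's connection tracker remembers each mapping; `conntrack -L` (from conntrack-tools, if installed) lists the entries used to steer replies back to the right container.
    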

    ---

    I know there is a mix of 2 different concepts in my scenario (a WireGuard tunnel and virtual networking), but I really do not understand how the frame gets back to my docker container. When I look at the frames on wg0, there is no mention of either my container's MAC address or its actual IP.

    How/when/what exactly is happening to my frame so that it gets to the correct target between my physical interface, virtual interface and bridge? I mean, with VLANs there's a VLAN tag on the frame, so you can easily identify with Wireshark where it should go. But here, I cannot find any clue about who or what is doing the magic so the frame finds its way back to my docker container.

    What is encapsulated into the frame that makes everyone understand: "OHHH that's for 10.0.0.8, your docker container on bridge network br-b1de on the veth2b interface !!! "

    ---

    Sorry for my broken English and lack of networking terminology, and thank you to those who bore with me and are willing to give me some hints/a proper networking lesson.


    Virtual networking docker (bridge)

    Edit: Whoops I just read that [email protected] is for enterprise networks? I hope my small homelab question doesn't break the rules? If so I will redirect my question.

    ---

    Hi everyone !

    I'm scratching my head in finding an actual answer on how virtual networking in docker actually works (mostly on the packets/frame level) or some good documentation to improve my understanding on how everything fits together.

    Because I'm probably lacking the correct network terminology I made a simple network topology of my network. Don't hesitate to correct any network mistake.

    (network topology diagram)

    In my scenario, my docker container with the virtual interface veth2b22c98 and the IP 10.0.0.8 connects to the bridge network br-b1de95b5ea89. When I curl lemmy.ml from my container, the packets/frames are sent to my enp4s0 and go through my WireGuard tunnel to my VPN provider, which sends back the packet/frame/handshake...

    I probed every interface with tcpdump (enp4s0, wg0, br-b1,veth2b):

    • enp4s0: Every packet/frame is encapsulated in the WireGuard protocol with my physical interface's IP (192.168.1.30), no DNS is visible on that interface (as expected), and it is sent out to my ISP's public IP.

    • wg0: Shows every packet/frame with the actual protocol with my wireguard's interface IP (192.168.2.1) with the destination IP of lemmy.ml (Dst: 54.36.178.108)

    • br-b1: Shows every packet/frame with the actual protocol with my containers IP (10.0.0.8) with the destination IP of lemmy.ml (Dst: 54.36.178.108)

    ---

    I know there is a mix of 2 different concepts in my scenario (a WireGuard tunnel and virtual networking), but I really do not understand how the frame gets back to my docker container. When I look at the frames on wg0, there is no mention of either my container's MAC address or its actual IP.

    How/when/what exactly is happening to my frame so that it gets to the correct target between my physical interface, virtual interface and bridge? I mean, with VLANs there's a VLAN tag on the frame, so you can easily identify with Wireshark where it should go. But here, I cannot find any clue about who or what is doing the magic so the frame finds its way back to my docker container.

    What is encapsulated into the frame that makes everyone understand: "OHHH that's for 10.0.0.8, your docker container on bridge network br-b1de on the veth2b interface !!! "

    Sorry for my broken English and lack of networking terminology, and thank you to those who bore with me and are willing to give me some hints/a proper networking lesson.

    ---

    Edit: Changed something in my network diagram (WireGuard is not in a container; it runs bare-metal on the server) and fixed some typos.


    Beginner homelab (router/switch)

    Hi everyone :)

    It's time to switch things up and give my home network a proper minimal hardware upgrade. Right now everything is managed by my ISP's AIO firewall/router combo, which works okay-ish, but I'm already doing some firewall/DNS/VPN stuff on my minimal spare-laptop server to bypass most of my ISP's restrictions. So it's time to get a little bit "crazy" !

    While I do have some "power user" knowledge regarding Linux/servers/self-hosted services/networking, I'm a bit clueless hardware-wise, especially regarding my ISP's 2.5G Ethernet port.

    I have a 5 Gbit connection from my Internet provider (optic fiber), which is divided into 4 Ethernet ports (Eth1 2.5G, Eth2 1G, Eth3 1G, Eth4 0.5G or something in that range). Right now the Eth1 port is connected to an old 1G switch.

    1. To take full advantage of my ISP's 2.5G Ethernet port, do I need a router AND a switch capable of 2.5G throughput? Or only the router, with the switch dividing it accordingly between all connected devices on a 1G switch?

    I'm also looking for some recommendations/personal experience for a router and a switch with a budget of 250€.

    At first I was interested in a BananaPi as a router, to tinker a bit, but it seems a bit of a hassle to flash it with OpenWRT. Then I found an interesting post on Lemmy talking about the Intel N100/Celeron N5105, which looks more like what I'm looking for, but I'm not sure?

    2. I have no idea what's the best bet: an SBC (BananaPi, Orange Pi, Raspberry Pi...), a fully fledged router (like the TP-Link AX1800, flashed with OPNsense/OpenWRT) or an Intel N100/Celeron N5105 soft router?

    The capabilities I'm looking for:

    • VLAN capable
    • AP VLAN capable, to segment WiFi
    • Taking advantage of my ISP's 2.5G ethernet port
    • Firewall customization capabilities

    I have my eye on a managed switch I found on Amazon (SODOLA 6-port 2.5G web managed), but I have no idea how reliable they are; I've never heard of SODOLA.

    3. Any good recommendations I should look at for a managed switch with the same capabilities as above?

    4. Probably my last question is regarding WiFi APs. Is it possible to make an access point from my router even though it has no antennas? If I connect an access point directly to my router, will it be capable of providing WiFi?

    Thanks for reading through; I'm a bit unsure how I should spend my money to have a minimal but reliable/capable homelab setup. Every piece of advice is welcome. But keep in mind, I want to keep it minimal: good enough routing capability with intermediate firewall customisation. I'm already hosting a few containers on a spare laptop, and the traffic isn't going to be too crazy.


    Samba vs NFS vs SSHFS ?

    Hi everyone !

    Right now I can't decide which one is the most versatile and fits my personal needs, so I'm looking into your personal experience with each one of them, if you don't mind sharing.

    It's mostly for secure shared volumes containing ebooks and media storage/files on my home network. Adding some security into the mix even though I actually don't need it (mostly as a learning process).

    More precisely, how difficult is NFS configuration with Kerberos? Is it actually useful? I've never used Kerberos and have no idea how it works, so it's very much new tech to me.

    I would really appreciate some in-depth personal experience and why you would consider one over another !

    Thank you !


    sshfs permission denied on root-owned path folder

    Hello !

    I'm getting a bit annoyed with permission issues with Samba and sshfs. If someone could give me some input on how to find another, more elegant and secure way to share a folder path owned by root, I would really appreciate it !

    Context

    • The following folder path is owned by root (docker volume):

    /var/lib/docker/volumes/syncthing_data/_data/folder

    • The child folders are owned by the user server

    /var/lib/docker/volumes/syncthing_data/_data/folder

    • The user server is in the sudoers file
    • server is in the docker group
    • fuse.conf has user_allow_other uncommented

    Mount point with sshfs

    sudo sshfs [email protected]:/var/lib/docker/volumes/syncthing_data/_data/folder /home/user/folder -o allow_other

    > Permission denied

    Things I tried

    • Adding other options like gid 0,27,1000 uid 0,27,1000 default_permissions...
    • Finding my way through stackoverflow, unix.stackexchange...

    Solution I found

    1. Make a bind mount from the root-owned path to a new path owned by server

    sudo mount --bind /var/lib/docker/volumes/syncthing_data/_data/folder /home/server/folder

    2. Mount with sshfs

    sshfs [email protected]:/home/server/folder /home/user/folder

    Question

    While the above solution works, it overcomplicates my setup and adds an unnecessary mount point to my laptop and fstab.

    Isn't there a more elegant solution that works directly with the user server (which has root access) to mount the folder with sshfs, even if the folder path is owned by root?

    I mean the user has root access so something like:

    sshfs [email protected]:/home/server/folder /home/user/folder -o allow_other should work even if the first part of the path is owned by root.

    Changing the owner/permissions of the path recursively is out of the question !
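    One possibly more elegant route, assuming the server runs OpenSSH: sshfs can override the command used for the remote SFTP server, so the remote side runs as root while you still log in as server. A sketch (the sftp-server path is distro-specific; the Debian location is shown here as an assumption):

    ```shell
    # Run the remote sftp-server through sudo, so root-owned paths are readable
    # while authentication still happens as the unprivileged user "server".
    sshfs -o sftp_server="/usr/bin/sudo /usr/lib/openssh/sftp-server" \
      [email protected]:/var/lib/docker/volumes/syncthing_data/_data/folder \
      /home/user/folder
    ```

    This needs a NOPASSWD sudoers rule on the server for that exact binary, since the SFTP session has no tty to prompt for a password.
    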

    Thank you for your insights !


    Sharing my personal Firefox user.js based on arkenfox's privacy policies.

    Hi everyone :)

    For those interested, I'm sharing my just-finished personal Firefox user.js. It's based on the latest arkenfox file and has the same privacy features, with some personal tweaks to fit my workflow. It's also easier to read 😅.

    https://github.com/KalyaSc/fictional-sniffle/blob/main/user.js

    ---

    KEEP IN MIND

    Except for the privacy-focused entries, some are personal choices for an easy drop-in Firefox preferences backup. This is what I consider a good privacy model, and some entries could break YOUR workflow, especially if you don't have self-hosted alternatives (Vaultwarden, Linkding, Wallabag).

    I'm not an expert, but most of those entries are the same as arkenfox's user.js. I really encourage you to read their file for a better understanding of what each entry does. While my file is easier to read, one downside is the lack of documentation for each entry.

    Also, this is not just a COPY/PASTE. It took a lot of effort, time, reading, testing and understanding. I kept a similar naming scheme for cross-referencing.

    I learned a few things and hope that you will also enjoy, edit, read and learn new interesting things.

    Happy hardening !

    ---

    Features

    • Automatic dark mode theme (Keep in mind you still need Dark Reader or similar plugin for web pages in dark mode.)
    • Deep-clean history on every Firefox quit. Only cookies are kept as an exception; I need them for my self-hosted services.
    • Disable password manager/auto-fill/breach alerts. Vaultwarden takes care of everything.
    • All telemetry disabled by default except for crash reports. To also disable crash reports, comment out the beginning of the following lines with //: user_pref("breakpad.reportURL", ""); user_pref("browser.tabs.crashReporting.sendReport", false); user_pref("browser.crashReports.unsubmittedCheck.enabled", false); user_pref("browser.crashReports.unsubmittedCheck.autoSubmit2", false);
    • DoH disabled (got my personal VPN with DoH enabled) user_pref("network.trr.mode", 5);
    • Disable WebRTC. If you need it for video calling, meetings, video chats:

    Comment the following line: user_pref("media.peerconnection.enabled", false); Uncomment the following (arkenfox default, it will force WebRTC inside your configured proxy) //user_pref("media.peerconnection.ice.default_address_only", true); //user_pref("media.peerconnection.ice.proxy_only_if_behind_proxy", true);

    • FIxed Width and Height (1600x900) (Finger print resistant) arkenfox's default
    • Resist Fingerprinting (RFP) which overrides finger print protection (FPP)
    • Alot of other tweaks you can discover while reading through the file.

    How to use/test this file?

    Open Firefox, type about:profiles in the address bar and create a test profile. Open the corresponding root folder, drop in the user.js, and launch the profile in a new browser window.

    Once you've tested it and are happy with the result, BACK UP your main Firefox profile somewhere safe, then put the user.js in your main profile to see if it fits your workflow.
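    The backup step above can be sketched roughly like this. Everything here is a hedged example: the profile directory name is hypothetical (yours is listed in about:profiles), and a throwaway temp directory stands in for the real ~/.mozilla/firefox path.

    ```shell
    # Sketch: back up a Firefox profile, then drop user.js into it.
    # A temp directory stands in for the real profile path, which is
    # normally something like ~/.mozilla/firefox/xxxxxxxx.default-release
    BASE="$(mktemp -d)"
    PROFILE="$BASE/abcd1234.default-release"
    mkdir -p "$PROFILE"
    echo 'user_pref("network.trr.mode", 5);' > "$BASE/user.js"

    cp -a "$PROFILE" "$PROFILE.bak"        # 1. full profile backup first
    cp "$BASE/user.js" "$PROFILE/user.js"  # 2. Firefox applies it on next launch
    ```

    Firefox reads user.js at startup and copies its values into prefs.js, so keeping the backup lets you roll back cleanly if something breaks.
    
    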

    Room for improvement / TODO.

    A lot of the settings in the 5000 range from arkenfox's user.js need further testing and investigation, because they could break things and cause performance/stability issues.

    • JS exploit hardening:

    ```
    javascript.options.baselinejit
    javascript.options.ion
    javascript.options.wasm
    javascript.options.asmjs
    ```

    • Disable WebAssembly
    • ...

    TODO

    • Disable non-modern cipher suites
    • Control TLS versions
    • Disable SSL session IDs [FF36+]

    Those settings are another beast and need further testing/investigation into how they work.

    The user.js file

    https://github.com/KalyaSc/fictional-sniffle/blob/main/user.js

    WARNING

    Arkenfox advises against add-ons that scramble and randomize your fingerprint characteristics (like Chameleon).

    WHY? Because Resist Fingerprinting takes care of most things. See 4500: RFP (resistFingerprinting) in the arkenfox user.js.

    ```
    [WARNING] DO NOT USE extensions to alter RFP protected metrics

     418986 - limit window.screen & CSS media queries (FF41)
    1281949 - spoof screen orientation (FF50)
    1330890 - spoof timezone as UTC0 (FF55)
    1360039 - spoof navigator.hardwareConcurrency as 2 (FF55)
    FF56
    1333651 - spoof User Agent & Navigator API
       version: android version spoofed as ESR (FF119 or lower)
       OS: JS spoofed as Windows 10, OS 10.15, Android 10, or Linux
           HTTP Headers spoofed as Windows or Android
    1369319 - disable device sensor API
    1369357 - disable site specific zoom
    1337161 - hide gamepads from content
    ....

    Very long list !
    ```

    Final words

    I'm open to any constructive criticism or comment that could help me improve or understand something new, or something I misunderstood. Sure, it's not 100% my own work, but as I said, it took a lot of time, testing, searching and reading... Please don't be a crazy Panda...

    Credits

    https://github.com/arkenfox/user.js

    https://github.com/pyllyukko/user.js/

    https://wiki.archlinux.org/title/Firefox/Privacy


    AdguardVPN sketchy DNS requests.

    After the discussion in the following post, I dug a bit deeper down the rabbit hole.

    While I mostly relied on Exodus to check whether an app has trackers in it... I was baffled to see all the sketchy requests it made while dumping the DNS requests with PCAPdroid...

    Over 200 shady requests in a few seconds after login... here's a preview:

    [screenshot: captured DNS requests]

    While I don't use AdguardVPN, I do have AdGuard Home as the DNS server in my homelab... I think it's time to switch to Pi-hole!

    Edit: PCAPdroid captures with the VPN active:

    [screenshots: PCAPdroid captures]


    NetworkManager: Wireguard VPN connection GUI broken in Gnome?

    Hello again :)

    I'm not talking about a broken wg connection; everything works as expected through the CLI and systemctl.

    But the NetworkManager GUI in GNOME shows my WireGuard connection as if it were "not connected", and when I click the switch it actually disconnects my wg interface.

    Also, when I try to edit my connection with

    nmcli connection modify wg0 connection.autoconnect yes

    and restart my wireguard connection with

    systemctl restart wg-quick@wg0

    it recreates a new WireGuard interface.

    While everything works as expected with the usual tools (wg-quick, systemctl...), the GUI seems "broken".
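    For what it's worth, a likely explanation is that NetworkManager only reflects connections it manages itself; an interface brought up by wg-quick/systemd is "unmanaged" from its point of view. A possible (untested here) approach, assuming the config lives at /etc/wireguard/wg0.conf, is to hand the connection over to NetworkManager:

    ```shell
    # Sketch (path hypothetical): let NetworkManager own the WireGuard
    # connection instead of wg-quick, so the GNOME GUI reflects its real state.
    sudo nmcli connection import type wireguard file /etc/wireguard/wg0.conf
    sudo systemctl disable --now wg-quick@wg0   # avoid two tools fighting over one interface
    nmcli connection up wg0
    ```
    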

    Has anyone else noticed this, or is it somehow related to my setup?

    Debian 12 bookworm, GNOME, nmcli tools 1.42.4


    Tar: what's the implication of the ./ and ./file structure in the tar file?

    Solved

    After interesting/insightful inputs from different users, here are the takeaways:

    • It has no critical or dangerous impact or implications when extracted
    • It contains the tarred parent folder (see below for some neat tricks)
    • It only overwrites the owner/permissions if ./ itself is included in the tar file as a directory.
    • Tarbombs are specially crafted tar archives with absolute paths (/). By default, (GNU) tar strips absolute paths and throws a warning, unless a special option (--absolute-names or -P) is used.
    • Interesting read: Path-traversal vulnerability (../)
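    The absolute-path point is easy to verify yourself. A minimal sketch (throwaway paths via mktemp) showing GNU tar stripping the leading / unless -P is given:

    ```shell
    # Sketch: GNU tar removes leading '/' from member names by default,
    # so extracting can't silently clobber absolute paths.
    d="$(mktemp -d)"
    echo hi > "$d/file.txt"
    tar -cf "$d/abs.tar" "$d/file.txt" 2>/dev/null  # warns: Removing leading `/'
    tar -tf "$d/abs.tar"                            # member name has no leading '/'
    tar -Pcf "$d/abs2.tar" "$d/file.txt"            # -P keeps the absolute path
    tar -tf "$d/abs2.tar"                           # member name starts with '/'
    ```
    
    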

    Some neat tricks I learned from the post

    A temporarily created subshell with its own environment:

    ```
    Let's say you're in the home directory /home/joe. You could do something like:

    > (cd bin && pwd) && pwd
    /home/joe/bin
    /home/joe
    ```

    source

    Exclude the parent folder and the ./ and ./file prefixes from the tar

    There are probably a lot of different ways to achieve the expected result:

    (cd mydir/ && tar -czvf mydir.tgz *)

    find mydir/ -printf "%P\n" | tar -czf mytar.tgz --no-recursion -C mydir/ -T -

    source
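    A quick way to check what either variant produced (directory and file names here are made up for the demo):

    ```shell
    # Sketch: confirm the archive members carry no parent-folder prefix.
    base="$(mktemp -d)"
    mkdir -p "$base/mydir"
    touch "$base/mydir/a.txt" "$base/mydir/b.txt"
    (cd "$base/mydir" && tar -czf ../mydir.tgz *)
    tar -tzf "$base/mydir.tgz"    # lists just: a.txt and b.txt
    ```
    
    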

    ---

    • The absolute path could overwrite my directory structure (tarbomb). source
    • It will overwrite the permissions/owner of the current directory if extracted. source

    I'm sorry if my question wasn't clear enough; I'm really doing my best to be as comprehensible as possible :/

    ---

    Hi everyone !

    I'm playing around a bit with tar to understand how it works under the hood. While poking around and searching the web, I couldn't find an actual answer on what the implications of the ./ and ./file structure in a tar archive are.

    Output 1

    ```sh
    sudo find ./testar -maxdepth 1 -type d,f -printf "%P\n" | sudo tar -czvf ./xtractar/tar1/testbackup1.tgz -C ./testar -T -
    ```

    ```
    #output
    > tar tf tar1/testbackup1.tgz

    text.tz
    test
    my
    file.txt
    .testzero
    test01/
    test01/never.xml
    test01/file.exe
    test01/file.tar
    test01/files
    test01/.testfiles
    My test folder.txt
    ```

    Output 2

    ```sh
    sudo find ./testar -maxdepth 1 -type d,f | sudo tar -czvf ./xtractar/tar2/testbackup2.tgz -C ./testar -T -
    ```

    ```
    #output
    > tar tf tar2/testbackup2.tgz

    ./testar/
    ./testar/text.tz
    ./testar/test
    ./testar/my
    ./testar/file.txt
    ./testar/.testzero
    ./testar/test01/
    ./testar/test01/never.xml
    ./testar/test01/file.exe
    ./testar/test01/file.tar
    ./testar/test01/files
    ./testar/test01/.testfiles
    ./testar/My test folder.txt
    ./testar/text.tz
    ./testar/test
    ./testar/my
    ./testar/file.txt
    ./testar/.testzero
    ./testar/test01/
    ./testar/test01/never.xml
    ./testar/test01/file.exe
    ./testar/test01/file.tar
    ./testar/test01/files
    ./testar/test01/.testfiles
    ./testar/My test folder.txt
    ```

    The outputs are clearly different, and if I extract them both, the only difference I see is that the second one also outputs the parent folder. But reading here and here, this is not a good solution? Nobody actually says why, though.

    Does anyone have a good explanation of why the second way is bad practice? Or not recommended?

    Thank you :)


    Estimate laptop power consumption (/sys/class/powercap/*/energy_uj)

    Hello everyone !

    I have no idea if I'm in the right community, because it's a mix of hardware and some light code/commands to extract the power consumption of my old laptop. I need some assistance; it would be great if someone way more intelligent than me could check the code and give feedback :)

    Important infos

    • 12 year old ASUS N76 laptop
    • Bare bone server running Debian 12
    • No battery (died a long time ago)

    Because I have no battery connected to my laptop, it's impossible to use tools like lm-sensors, powerstat or powertop to output the wattage. But based on the following resource, I can estimate the power from the energy counters.

    ```
    time=1
    declare T0=($(sudo cat /sys/class/powercap/*/energy_uj))
    sleep $time
    declare T1=($(sudo cat /sys/class/powercap/*/energy_uj))
    for i in "${!T0[@]}"; do
      echo - | awk "{printf \"%.1f W\", $((${T1[i]}-${T0[i]})) / $time / 1e6 }"
    done
    ```

    While it effectively outputs something, I'm not sure if I can rely on it to estimate the power consumption, or whether the code is actually correct :/

    Thanks :).

    Edit:

    My goal is to measure the power drawn by my laptop without any external electric appliance (maybe I worded my question/title wrong?). While this could easily be done with tools like powertop or lm-sensors, those only work by measuring the battery discharge, which in my case is impossible because my laptop is connected directly to the outlet with its power cord (the battery died years ago).

    I dug a bit further through the web and found someone who asked the same question on superuser.com. While it gives a different reference point, nobody could actually answer the question.

    This seems a bit harder than I thought and is actually related to the /sys/class/powercap/*/energy_uj files; I was hoping someone could give me a bit more detail on how this works and what the output actually shows.

    This also seems related to the power capping framework in the Linux kernel. As per the documentation, the file represents the CPU package's current energy counter in microjoules.

    So I came a bit closer to understanding how it works and what it does, even though I'm still not sure what I'm actually looking at :\ .
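    If it helps, the delta math from the snippet above can be isolated into a small function. This is only a sketch (the sample numbers are made up), with one extra wrinkle the one-liner ignores: energy_uj is a wrapping counter, and max_energy_range_uj (in the same sysfs directory) gives the wrap point, so a negative delta can be corrected.

    ```shell
    # Sketch: average watts from two energy_uj samples taken dt seconds apart.
    # energy_uj wraps around at max_energy_range_uj, so fix up a negative delta.
    watts_from_uj() {
      local e0=$1 e1=$2 dt=$3 max=$4
      local d=$(( e1 - e0 ))
      if [ "$d" -lt 0 ]; then d=$(( d + max )); fi  # counter wrapped during the interval
      awk -v d="$d" -v t="$dt" 'BEGIN { printf "%.1f W\n", d / t / 1e6 }'
    }

    watts_from_uj 1000000 16000000 1 262143328850   # -> 15.0 W
    ```

    In real use, the two samples would come from two reads of /sys/class/powercap/*/energy_uj separated by a sleep, and note this only covers what RAPL meters (CPU package), not the whole machine.
    
    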


    Laptop power consumption from outlet.

    Edit:

    Sorry for the bad posting :/. If anyone is interested, here is my actual post: https://lemmy.ml/post/12594067

    ---

    Hello everyone !

    I have no idea if I'm in the right community, because it's a mix of hardware and some light code/commands to extract the power consumption of my old laptop. I need some assistance; it would be great if someone way more intelligent than me could check the code and give feedback :)

    Important infos

    • 12 year old ASUS N76 laptop
    • Bare bone server running Debian 12
    • No battery (died a long time ago)
    • Running a dozen Docker containers

    Because I have no battery connected to my laptop, I'm unable to use tools like lm-sensors, powerstat or powertop. But based on the following resource, I can estimate the power from the energy counters.

    ```
    time=1
    declare T0=($(sudo cat /sys/class/powercap/*/energy_uj))
    sleep $time
    declare T1=($(sudo cat /sys/class/powercap/*/energy_uj))
    for i in "${!T0[@]}"; do
      echo - | awk "{printf \"%.1f W\", $((${T1[i]}-${T0[i]})) / $time / 1e6 }"
    done
    ```

    While it effectively outputs something, I'm not sure if I can rely on it to estimate the power consumption.

    Thanks :).


    Terminal navigation and Editors

    Hi everyone :)

    I'm slowly getting used to navigating and editing things in the terminal without leaving the keyboard or touching the arrow keys. I'm getting faster, and it has improved my workflow in the terminal (Yeahhii).

    Ctrl + a, e, f, b, u, k... Alt + f, b, d...

    But yesterday I had such a bad experience while editing a backup bash script with nano. It took me like an hour to make small edits, like a caveman, and I kept breaking the editor whenever I used muscle-memory terminal shortcuts.

    This really pissed me off... I know nano also has minimal/limited shortcuts, but having to memorize and switch between different sets for different purposes seems like a waste of time.

    I think I tried Emacs a few months ago, but it didn't click. I didn't spend enough time with it, though; I tried it for a few minutes and deleted it afterwards. Maybe I should give it a second try?

    I also gave Vim a try, but that session is still open and I can't exit (😂)! Vim seems rather too complex for my workflow; I'm just a self-taught power user making his way through Linux. Am I wrong?

    Isn't there something more "universal"? Something that works the same everywhere, something portable I can use wherever I go?

    I'm very interested in everyone's thoughts, insights, personal experiences and tips/tricks to avoid what happened yesterday!

    Thanks !


    Golang / self-hosted docker apps.

    First of all, thank you for all the amazing things you do for the self-hosting and FOSS community! We wouldn't have those shiny things without you! I'm not a dev and have just played around with Python (and I know how most of you feel about it 🤫), so I have very limited knowledge of programming languages.

    I know what a low-level language is (C, C#, Rust?), I know about general scripting tools, and I've even heard about assembly. It always baffles me how all those lines of code rule our microchips and make them communicate and understand each other, but that's another story! This is about Golang!

    ---

    As a self-hosting enthusiast, when I look at a GitHub repository, I always check the programming language used, even though I have no idea whether those languages integrate well with each other or whether it's the best programming language for that kind of application.

    And every time I see Golang, it makes me smile, and I get the feeling it's going to be a good application. I know it also depends on the programmer's skill and creativity, but all my self-hosted Go apps work like a charm.

    Traefik is the best example: I've never had any issue or strange behavior, except for wrong configuration files on my side.

    Or Navidrome, a Subsonic-compatible music server, also written in Go, which works great and is fast AF!

    Or Vikunja, the todo app... and many more!

    I'm probably biased because I have no idea how the programming realm works, but I have the feeling that Golang is a certificate of well-working, fast applications. Just too bad it's backed/supported by Google (uuhhg).

    Feel free to debate and give me your personal opinion of the Go language; are my feelings right, or am I just being silly? :)

    Thanks for reading through 👋
