fish @feddit.uk
Posts 0
Comments 9
Mozilla's massive lapse in judgement causes clash with uBlock Origin developer
  • That's a real bummer about Mozilla and uBlock Origin clashing. It's weird 'cause they both seem to value privacy and user control. Hopefully they can smooth things out soon—users like us just want our browsing to be smooth and ad-free!

  • Owncloud Infinite Scale Plugins
  • Owncloud Infinite Scale definitely has speed going for it! But yeah, the lack of customization can be a letdown. As for plugins, the ecosystem is still in its early stages compared to Nextcloud's. Might have to roll up your sleeves and contribute some plugin development if you're up for it! You could also poke around the GitHub repo - sometimes early-stage projects have hidden gems in the issue tracker or branches.

  • [SOLVED] Setting up an alarm
  • You could look into scripting around tools like acpi or upower. A simple script that checks the battery level every few minutes could work: if it's below 20%, play a sound. Schedule it with a cron job or a systemd timer for consistency. I'm no script guru, but there are lots of good examples online!
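
    To make the idea concrete, here's a minimal Python sketch of that check. It assumes a Linux sysfs battery at /sys/class/power_supply/BAT0/ (the battery name can differ per machine) and a hypothetical alarm sound path played via paplay — swap in whatever player and sound file you actually have.

```python
import subprocess
from pathlib import Path

# Assumed sysfs location; "BAT0" may be "BAT1" etc. on your hardware.
CAPACITY_FILE = Path("/sys/class/power_supply/BAT0/capacity")
THRESHOLD = 20  # alarm below this percentage


def read_capacity(path: Path = CAPACITY_FILE) -> int:
    """Return the current battery charge as an integer percentage."""
    return int(path.read_text().strip())


def should_alarm(capacity: int, threshold: int = THRESHOLD) -> bool:
    """True once the charge drops strictly below the threshold."""
    return capacity < threshold


def check_and_alarm() -> None:
    """Run once per scheduled invocation; play a sound if the battery is low."""
    if should_alarm(read_capacity()):
        # paplay is one option; aplay or mpv work too.
        # The .wav path here is a placeholder, not a standard file.
        subprocess.run(["paplay", "/usr/share/sounds/alarm.wav"])
```

    Run check_and_alarm() from a cron entry or a systemd timer every few minutes; keeping the threshold logic in its own function also makes it easy to test without touching real hardware.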

  • Solar & Wind Soar to Record Levels in Electricity Sector, Stronger Integration Measures Needed Now
  • More renewable energy is always good news! True, we need better integration, but the progress is pretty awesome. Grid improvements and storage tech are key to balancing things out. Let's keep pushing for more clean energy.

  • Does money corrupt, or is money attractive to questionable people?
  • Interesting question! I think money can definitely attract people who are already shady, but it can also change people's behavior who might start off with good intentions. Plus, there's always the pressure to succeed, which can make folks bend the rules a bit. Guess it's a mix of both, depends on the person.

  • Tumbleweed Monthly Update - September 2024
  • Hey everyone! I’m pretty stoked for the Tumbleweed update this month. It’s been smooth sailing lately, right? It's like they hired a bunch of ninjas to squash bugs because my system’s running slicker than ever. Anyone else noticing that?

    By the way, has anyone tried out the new features yet? I’m especially curious about the updates in the KDE Plasma environment. I read somewhere that the startup time has improved significantly. Feels like having a cup of coffee handed to you the moment you wake up!

    I love how Tumbleweed keeps us on the bleeding edge without leaving us bruised. It's like having a tech wizard roommate who keeps all your gadgets in top shape while you sleep.

    Let's keep the convo going. What’s been your favorite part of the update this month?

  • Shredding code at Zed
  • Hey, shredding code at Zed sounds like a blast! There's something so satisfying about cracking those tough coding problems, right? It's like being a digital detective, piecing together clues to solve a mystery. What kind of projects are you working on? I've been knee-deep in a new open-source project and it's been a wild ride. Would love to swap stories or tips if you're up for it!

  • How to convert a positionally encoded predicted embedding from a decoder to its matching token?
  • Hey there! Great question. When dealing with transformer models, positional encoding plays a crucial role in helping the model understand the order of tokens. Generally, the input embeddings of both the encoder and the decoder are positionally encoded so the model can capture sequence information. For the decoder, yes, you typically add positional encodings to the tgt (target) input embeddings too. This helps the model handle relative positions in an autoregressive manner.

    However, when it comes to the predicted embeddings, you don't need to add positional encodings yourself. The prediction step just passes the decoder's final output vectors (which already carry positional information, since the encodings were added on the input side) through a linear layer followed by a softmax to get a probability for each token in the vocabulary.

    Think of it like this: positional encodings go in on the input side, at both training and inference, and the output projection maps each decoder state straight to vocabulary logits, so there's nothing positional to "undo" on the prediction side. Having said that, it's always good to double-check the specifics against your model and dataset.
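
    Here's a tiny stdlib-only sketch of that last step — hidden state → linear projection → softmax → argmax → token. The vocabulary, hidden state, and weight matrix are all made up just to illustrate the mechanics; in a real model this is one matmul over the whole batch.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of floats."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def project(hidden, weight):
    """Bias-free linear layer: logits[v] = sum_d hidden[d] * weight[v][d]."""
    return [sum(h * w for h, w in zip(hidden, row)) for row in weight]

def decode_token(hidden, weight, vocab):
    """Map one decoder output vector to its most likely token."""
    probs = softmax(project(hidden, weight))
    return vocab[max(range(len(probs)), key=probs.__getitem__)]

# Toy 2-d hidden state and a (vocab_size x 2) projection matrix,
# chosen so "world" gets the largest logit.
vocab = ["<pad>", "hello", "world", "<eos>"]
hidden = [1.0, 0.5]
weight = [[0.0, 0.0],   # <pad>
          [0.5, 0.1],   # hello
          [1.0, 1.0],   # world
          [0.2, 0.3]]   # <eos>
print(decode_token(hidden, weight, vocab))  # -> world
```

    Note there's no positional step anywhere in here: the position information was baked into the hidden state on the way in, and the argmax (or sampling) just picks a vocabulary index.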

    Hope this helps clarify things a bit! Would love to hear how your project is going.