wagesj45 @kbin.social

Great American humorist. C# developer. Open source enthusiast.

XMPP: [email protected] Mastodon: [email protected] Blog: jordanwages.com

Posts 21
Comments 478
A tale of sovereign citizen woe.
  • I think he's talking about that specific account in the group. I've also noticed several other posts with that profile picture of the black guy with a hat and sunglasses sitting in his car.

  • Starting from zero
  • This is important. I dunno about scale, but backups. I started out hosting a chat room on a raspberry pi. It was a fun side project. But then, that became where my friends all hung out. That was the place, so it became important to me. And then the SD card got corrupted. I then moved on to a consumer laptop. It was way more stable, much faster. But if I messed up anything about the installation, I was hosed.

    I highly suggest using Proxmox, like you say, and setting up automatic backups, and occasionally transferring them to a hard drive. It doesn't matter what kind of virtual CPUs or services you install, [email protected], as long as you have a plan for when something you host becomes important to you and you lose it.

  • Developers, you need to adapt to the new oval phone's specs
  • Oh fuck, I want that. Gimme gimme.

  • MercuryAlloy - Automated Build Service for the Mercury Browser

    github.com GitHub - wagesj45/MercuryAlloy: MercuryAlloy automates the build process for the Mercury browser, harnessing advanced compiler optimizations to deliver a faster, more efficient web experience.

    I really like the Mercury browser, but I was worried about it getting out of date, since releases of the browser seem to be built and released manually. So I threw together a set of scripts and overrides that allow the build process to run without user interaction and on a schedule. You can modify the subscripts to move your compiled executable anywhere you want (like a web server), as well as send a custom alert upon successful build (like sending the link out via email).

    This is a more technical project, but it has been a fun learning experience.
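    The general shape of that automation can be sketched as follows. The build command, artifact name, and alert hook below are hypothetical placeholders, not MercuryAlloy's actual subscripts: run the build with no prompts, publish the artifact, and fire an alert on success.

    ```python
    import shutil
    import subprocess
    from pathlib import Path

    def build_and_publish(build_cmd, repo_dir, artifact, dest_dir, alert=None):
        """Run an unattended build, copy the artifact, and report success.

        build_cmd: command list for the non-interactive build step.
        artifact:  file the build produces, relative to repo_dir.
        dest_dir:  where to publish it (e.g. a web server's docroot).
        alert:     optional callable for a success notification (email, etc.).
        """
        # check=True raises on a failed build, so a cron job or systemd
        # timer surfaces the error instead of silently publishing nothing.
        subprocess.run(build_cmd, cwd=repo_dir, check=True)

        built = Path(repo_dir) / artifact
        dest = Path(dest_dir) / built.name
        shutil.copy2(built, dest)

        if alert is not None:
            alert(f"Build succeeded: {dest}")
        return dest
    ```

    Scheduling is then just a cron entry or systemd timer pointing at a script like this.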

    0
    The Chlorophyll Queen
  • The Chlorophyll Queen, hero pose, perfect skin, dramatic lighting, colorful rainbow spectrum cinematography, grand epic, fantastical vista, (Movie Still) (Film Still) (Cinematic) (Cinematic Shot) (Cinematic Lighting)

  • Reddit Signs AI Content Licensing Deal Ahead of IPO
  • What makes you think that?

  • Reddit Signs AI Content Licensing Deal Ahead of IPO
  • Fair. The rest of the site is a lot more normal. More being a relative term, of course.

  • Any other greens feel under attack recently from democrats in the United States?
  • Unfortunately the US is just... very right wing. Even the democrats are right of center on most things. This is just a conservative country. The democrats are about as left as you get in this country with any mainstream support, no matter how much we wish it weren't so.

    And I know man, I hate it, and I'm not going to lecture anyone that votes their conscience. If you don't want to vote for democrats, and Biden specifically, I can't blame you. It bewilders me that the most we can get from him over a goddamn genocide is "that's a little much, Jack." If you can't vote for that I get it.

    Just don't fall into the trap of thinking that the US is more left leaning than it is. We might win on issue-to-issue polls, but when it comes down to it we're a selfish nation that has bought into the temporarily embarrassed millionaire meme. And I don't think that's just the pessimism talking. We have generations worth of work ahead of us.

  • Judge Cannon orders Jack Smith to turn over controversial info to Trump: court filing
  • That's how the auto-fill works on kbin when you add a new link. I'm not sure if it uses the summary provided by the metadata of the site, or if it pulls x amount of words/characters.

  • Judge Cannon orders Jack Smith to turn over controversial info to Trump: court filing
  • No, when you post a new link on kbin, it autofills the text section of the post along with the title. Try it yourself. Copy the link, click the plus sign at the top, click "Add new link", and paste in the URL.

  • Junior Dev VS Machine Learning
  • And hundreds of thousands of years of evolution pre-training the base model that their experience was layered on top of.

  • Mercury - a Firefox fork with compiler optimizations
  • Any reasons why you can't recommend it?

  • Mercury - a Firefox fork with compiler optimizations
  • Interesting, because I saw a 20 point increase between vanilla Firefox and Mercury when testing last night.

  • [Community challenge 22] Busy Bees: Animals with Jobs
  • Makeup Artist

    Prompt: anthropomorphized bee working as a (makeup artist:1.2), illustrStyle
    Negative Prompt: ac_neg1 ac_neg2 negativeXL_D unaestheticXL_Sky3.1

  • Stable Video Diffusion img2vid XT 1.1 Released
  • I'm not entirely sure how this would help with that, though. Is the model watermarked with your info or something? If you post the result anonymously, I don't know how you'd track it back to someone that submitted their info here.

  • UPDATE: Suspect In Custody After Beheading

    levittownnow.com UPDATE: Suspect In Custody After Beheading - LevittownNow.com

    Middletown Township Chief of Police Joseph Bartorilla confirmed the suspect in the death was arrested just after 9 p.m. Tuesday.

    6
    Generative AI @kbin.social wagesj45 @kbin.social

    Corporate vs Local

    Ah, the power of local models. Would DALL-E produce a better version? Probably. If it would produce one at all, that is.

    I can kind of see why this happened, but does anyone really think there is a risk of DALL-E spitting out a reasonable instruction manual on how to build a nuclear bomb?

    0

    Frog with Eyes (NOT) Closed

    I tried to get SD-XL to generate an image of a frog with its eyes closed. It refused. I even cranked up the attention on "closed" to an absurd level, and it seemed to get sassy with me.

    13
    /kbin meta @kbin.social wagesj45 @kbin.social

    Blocking and Downvote Stalking

    Should blocking a user still allow them to vote on your posts? I'd rather have nothing to do with particular users, and it seems that they continue to show up in the activity for every single post I make around kbin.

    23

    Ham Solo

    Stable Diffusion XL

    Prompt: Ham Solo from Star Wars!

    0
    AskWomen @kbin.social wagesj45 @kbin.social

    How's it going?

    Having a good day?

    1

    Mistral 7B Released Under Apache 2.0

    mistral.ai Mistral 7B

    The best 7B model to date, Apache 2.0

    From their website:

    Mistral AI team is proud to release Mistral 7B, the most powerful language model for its size to date.

    Mistral 7B in short:

    Mistral 7B is a 7.3B parameter model that:

    • Outperforms Llama 2 13B on all benchmarks
    • Outperforms Llama 1 34B on many benchmarks
    • Approaches CodeLlama 7B performance on code, while remaining good at English tasks
    • Uses Grouped-query attention (GQA) for faster inference
    • Uses Sliding Window Attention (SWA) to handle longer sequences at smaller cost

    We’re releasing Mistral 7B under the Apache 2.0 license; it can be used without restrictions.

    Mistral 7B is easy to fine-tune on any task. As a demonstration, we’re providing a model fine-tuned for chat, which outperforms Llama 2 13B chat.
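    Of the two attention tricks in the list above, sliding window attention is the easier one to picture: each token attends only to a fixed-size window of recent tokens instead of the whole prefix, so per-token cost stays bounded by the window rather than growing with sequence length. A toy mask sketch (illustrative only, not Mistral's implementation):

    ```python
    def sliding_window_mask(seq_len, window):
        """Causal mask where token i attends only to tokens in
        [max(0, i - window + 1), i], not to the entire prefix."""
        mask = []
        for i in range(seq_len):
            row = [max(0, i - window + 1) <= j <= i for j in range(seq_len)]
            mask.append(row)
        return mask

    # With seq_len=5 and window=3, token 4 attends to tokens 2..4 only,
    # so attention cost per token is O(window) instead of O(seq_len).
    ```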

    0
    jordanwages.com Retouching Skin in GIMP

    Learn how to easily remove blemishes on skin for free using the GIMP photo editor.

    When editing photos, one of the first things you'll want to do is "fix" the skin of your subjects. I've fumbled around with this for years. I use almost exclusively open source tools like GIMP for my work, which, while extremely powerful, often lack the automatic tools and niceties present in paid products like Photoshop. So I'll share my method, which I adopted from this YouTube video.

    0
    Selfhosting @kbin.social wagesj45 @kbin.social

    Share your network naming conventions!

    Depending on how much you self host, you may find it hard to keep track of your devices' host names. So what are your naming conventions to keep track of everything? Some people stick to descriptive names, others pick themes, like Greek mythology.

    Personally, I use Japanese emperors. I've made it all the way to Seinei. Luckily I still have some breathing room to add more services and servers. Much to my wife's chagrin. :)

    11
    jordanwages.com Turning Any Stable Diffusion Model Into an Inpainting Model

    Full Post Text

    ---

    In the spirit of full disclosure, the content of this post is heavily cribbed from this post on Reddit. However, as we've seen, the Internet is not forever. It is entirely possible that a wealth of knowledge could be lost at any time due to any number of reasons. Because I have found this particular post so helpful and find myself coming back to it over and over, I thought it would be appropriate to share this method of creating an inpainting model from any custom stable diffusion model.

    Inpainting models, like the name suggests, are specialized models that excel at "filling in" or replacing sections of an image. They're especially good at decoding what a section of an image should look like based on the section of image that already exists. This is useful when you're generating images and only a small section needs to be corrected, or if you're trying to add something specific to an image that exists.

    So how is this done? With a model merge. Automatic1111 has an excellent model merging tool that we'll use. Let's assume that you have a custom model called my-awesome-model.ckpt that is based on stable-diffusion-1.5.

    In the A1111 Checkpoint Merger interface, follow these steps:

    1. Set "Primary model (A)" to stable-diffusion-1.5-inpainting.ckpt.

    2. Set "Secondary model (B)" to my-awesome-model.ckpt.

    3. Set "Tertiary model (C)" to stable-diffusion-1.5.ckpt.

    4. Set "Multiplier (M)" to 1.

    5. Set "Interpolation Method" to Add difference.

    6. Give your model a name in the "Custom Name" field, such as my-awesome-model-inpainting.ckpt.

      • Adding "-inpainting" will signal to A1111 that the model is an inpainting model. This is useful for extensions such as openOutpaint. Also, it's just a good idea to properly label your models. Because we know you're a degenerate who has hundreds of custom waifu models downloaded from CivitAI.
    7. Click Merge.

    And bazinga! You have your own custom inpainting stable diffusion model. Thanks again to /u/MindInTheDigits for sharing the process.
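    For the curious, the "Add difference" interpolation in step 5 computes A + (B − C) × M for each weight tensor: it starts from the inpainting model and adds in whatever your custom model changed relative to the base. A rough sketch of that math, assuming the checkpoints are plain state dicts (A1111 handles the real loading and key bookkeeping):

    ```python
    def add_difference(a, b, c, m=1.0):
        """Merge checkpoint state dicts as A + (B - C) * M, tensor by tensor.

        Keys that exist only in A (for example the inpainting model's
        extra input channels) are copied through from A unchanged.
        """
        merged = {}
        for key, wa in a.items():
            if key in b and key in c:
                merged[key] = wa + (b[key] - c[key]) * m
            else:
                merged[key] = wa
        return merged

    # Hypothetical usage with the file names from the steps above:
    # a = torch.load("stable-diffusion-1.5-inpainting.ckpt", map_location="cpu")["state_dict"]
    # b = torch.load("my-awesome-model.ckpt", map_location="cpu")["state_dict"]
    # c = torch.load("stable-diffusion-1.5.ckpt", map_location="cpu")["state_dict"]
    # torch.save({"state_dict": add_difference(a, b, c)},
    #            "my-awesome-model-inpainting.ckpt")
    ```

    With M = 1 this is exactly "inpainting base plus your model's delta", which is why the tertiary model must be the base your custom model was trained from.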

    1

    Alternate Universe Computers and Devices from the 1990s

    mastodon.jordanwages.com jordan (@[email protected])

    Attached: 1 image what i wouldn't give to own one of these things. #ai #retrocomputing #retrofuturism #imaginarydevices #stablediffusion

    This is a Mastodon thread I created featuring devices and computers that never were, but could have been. I think my favorite might be the HD Laserdisc player called the MOID.

    https://mastodon.jordanwages.com/system/media_attachments/files/110/686/620/475/187/344/original/95ff43d76ba41dd6.jpg

    2
    Selfhosting @kbin.social wagesj45 @kbin.social

    Advice Wanted - Homelab with Industrial GPUs

    I just bought a "new" homelab server and am considering adding in some used/refurbished NVIDIA Tesla K80s. They have 24 GB of VRAM and tons of compute power for very cheap if you get them used.

    The issue is that these cards run super hot and require extra cooling setups. I was able to find this fan adapter kit on eBay. But I still worry that if I pop one or two of these bad boys in my server, the fan won't be enough to overcome the raw heat put off by the K80.

    Have any of you run this kind of card in a home lab setting? What kind of temps do you get when running models? Would a fan like this actually be enough to cool the thing? I appreciate any insight you guys might have!

    3

    Advice Wanted - Self Hosting Industrial GPUs

    I just bought a "new" homelab server and am considering adding in some used/refurbished NVIDIA Tesla K80s. They have 24 GB of VRAM and tons of compute power for very cheap if you get them used.

    The issue is that these cards run super hot and require extra cooling setups. I was able to find this fan adapter kit on eBay. But I still worry that if I pop one or two of these bad boys in my server, the fan won't be enough to overcome the raw heat put off by the K80.

    Have any of you run this kind of card in a home lab setting? What kind of temps do you get when running models? Would a fan like this actually be enough to cool the thing? I appreciate any insight you guys might have!

    7
    Gaming @kbin.social wagesj45 @kbin.social
    ocremix.org OverClocked ReMix: Video Game Music Community

    OverClocked ReMix is a video game music community with tons of fan-made ReMixes and information on video game music.

    ---

    I figured that since we're currently in the middle of a reawakening of decentralized and special-purpose forums and websites, now might be a good time to remind people of OCRemix. Or help someone discover it for the first time.

    13

    Long Sequence Modeling with XGen: A 7B LLM Trained on 8K Input Sequence Length

    blog.salesforceairesearch.com Long Sequence Modeling with XGen: A 7B LLM Trained on 8K Input Sequence Length

    TLDR We trained a series of 7B LLMs named XGen-7B with standard dense attention on up to 8K sequence length for up to 1.5T tokens. We also fine tune the models on public-domain instructional data. The main take-aways are: * On standard NLP benchmarks, XGen achieves comparable or better results

    0