  • who popularized breakfast

    Shared link: "Tray G The Great 🤴🏾 on TikTok" (www.tiktok.com): "Is your lifestyle rooted in propaganda? 🤔 The idea of the standard American breakfast was a psychological tactic that the meat & dairy industry used to grow, no matter what damage it may have caused… The digestive system utilizes A LOT of energy. What do you think happens when you overwork it at th..."

    The history of breakfast is a complex one, with different foods and customs evolving over time. Here are some key points from the search results:

    • In Ancient Egypt, peasants ate a daily meal, most likely in the morning, consisting of soup, beer, bread, and onions before they left for work in the fields or work commanded by the pharaohs[1].
    • Prior to the 16th century, breakfast in Great Britain typically included bread, cold meat or fish, and ale. Tea, chocolate and coffee were introduced to Great Britain in the mid-17th century, and by the 1550s, there were multiple sources that claimed breakfast was an important meal[1].
    • Historian Ian Mortimer suggests the Tudors invented modern breakfasts in the 16th century as a side-effect of inventing the concept of employment. As people increasingly came to work for an employer, rather than working for themselves on their own land, they lost control of their time, and had to work long, uninterrupted days without sustenance. A big breakfast allowed them to work longer days[2].
    • The Industrial Revolution and the move from farms to factories formalized the idea of breakfast further, and now it's normal for everyone to eat breakfast before going to work[2].
    • Cereal was invented in the 1890s by John Harvey Kellogg as a healthy solution to complaints of weight gain and indigestion. But this is not the only food that has been associated with breakfast over time[6].
    • Historians tend to agree that breakfast became a daily, first thing in the morning institution once workers moved to cities and became employees who worked set schedules. In Europe, this first began in the 1600s, and breakfast achieved near ubiquity during the Industrial Revolution[3].
    • The term "breakfast" originated in 15th century Europe, but eating in the morning didn't become a daily ritual in Europe until the 1600s as this is when many people began going to work each day[5].
    • The modern era of breakfast begins with cereal. Before its invention, breakfast was not as standard or routine. "The Romans believed it was healthier to eat only one meal a day," food historian Caroline Yeldham wrote[3].
    • People of the Middle Ages shunned breakfast on roughly the same grounds that people of the current age glorify it; later, those navigating the collision of industrialization and the needs of the human body came to blame hearty breakfasts for indigestion and other ailments[4].

    Overall, it seems that the concept of breakfast has evolved over time due to a variety of factors, including changes in work patterns, the introduction of new foods, and shifting cultural attitudes towards eating in the morning. While it's difficult to pinpoint one person or group who "popularized" breakfast, it's clear that the meal has a long and fascinating history.

    Citations: [1] https://en.wikipedia.org/wiki/Breakfast [2] https://www.mashed.com/66838/secret-history-breakfast/ [3] https://priceonomics.com/how-breakfast-became-a-thing/ [4] https://www.theatlantic.com/entertainment/archive/2016/06/breakfast-the-most-contentious-meal-of-the-day/487220/ [5] https://www.fridaysocks.com/blogs/news/the-history-of-bacon-and-eggs [6] https://www.indiatimes.com/lifestyle/a-brief-history-of-breakfast-why-you-should-skip-it-370307.html

    2
  • Why is the "I have nothing to hide" argument flawed?

    1. Privacy is about protecting information: Privacy is not just about hiding information, but rather about protecting it. Even if you think you have nothing to hide, there may still be information that you would like to keep private, such as personal conversations, financial transactions, or medical records[1].

    2. Data can be used against you: Even seemingly insignificant pieces of personal data can be used in ways you may not anticipate. Companies can use your data to build detailed profiles about you, which can be used for targeted advertising, price discrimination, or even manipulation[1].

    3. Potential for abuse: When your data is collected and stored by companies or governments, there is always the risk of it being misused or falling into the wrong hands. Data breaches and leaks are common, and your personal information could be exposed, leading to identity theft or other forms of harm[1].

    4. Chilling effect on freedom: Mass surveillance and the erosion of privacy can have a chilling effect on freedom of expression and individual autonomy. When people feel constantly monitored, they may self-censor or refrain from engaging in certain activities, limiting their freedom[3].

    5. Power imbalances: The collection and analysis of vast amounts of personal data can lead to power imbalances between individuals and those who control the data. This can result in discrimination, manipulation, and the concentration of power in the hands of a few[3].

    6. Government overreach: Mass surveillance and the collection of personal data by governments can lead to abuses of power and violations of civil liberties. It is important to hold governments accountable and ensure that they respect individuals' right to privacy[3].

    In summary, the "I have nothing to hide" argument overlooks the broader implications of data privacy and the potential risks and harms associated with the collection and use of personal data. Privacy is a fundamental right that should be protected, regardless of whether you have something to hide or not.

    Citations: [1] https://spreadprivacy.com/three-reasons-why-the-nothing-to-hide-argument-is-flawed/ [2] https://www.reddit.com/r/privacy/comments/3hynvp/how_do_you_counter_the_i_have_nothing_to_hide/ [3] https://www.amnesty.org/en/latest/campaigns/2015/04/7-reasons-why-ive-got-nothing-to-hide-is-the-wrong-response-to-mass-surveillance/ [4] https://teachprivacy.com/is-there-a-good-response-to-the-nothing-to-hide-argument/ [5] https://www.techdirt.com/2011/05/26/privacy-is-not-secrecy-debunking-if-youve-got-nothing-to-hide-argument/ [6] https://www.wired.com/2013/06/why-i-have-nothing-to-hide-is-the-wrong-way-to-think-about-surveillance/

    0
  • vscode enforce python type annotation

    To enable Python type checking in Visual Studio Code, you can use the Microsoft extension Pylance[1]. Pylance provides a set of useful features powered by Pyright, Microsoft's static type checker. With the extension installed and enabled, you get richer IntelliSense with typing information when calling package functions, for example. Type checking analysis, however, is not enabled by default; you need to configure it yourself. In your settings.json file, add the following setting:

    json { "python.analysis.typeCheckingMode": "basic" }

    The default value is "off", meaning type checking is disabled. Besides "basic", you can also set the value to "strict" for more thorough analysis, depending on your needs[1].

    Alternatively, you can use the Python Type Hint extension from the Visual Studio Marketplace[2]. This extension provides type hint auto-completion for Python, with completion items for built-in types, classes, and the typing module. It also estimates the correct type to provide as a completion item and can search Python files in the workspace for type estimation purposes[2].

    Another option is to use the Pyright static type checker for Python, which is a full-featured, standards-based static type checker for Python. It is designed for high performance and can be used with large Python source bases. Pyright includes both a command-line tool and an extension for Visual Studio Code[6].

    To configure real-time static type checking in VS Code for Python projects, you can make Python behave more like a statically typed language by enabling mypy linting. After setting up a virtual environment and installing mypy, add these settings to your local settings.json:

    json { "python.linting.mypyEnabled": true, "python.linting.mypyArgs": [ "--ignore-missing-imports", "--follow-imports=silent", "--show-column-numbers", "--allow-untyped-defs", "--allow-subclassing-any" ] }

    With --allow-untyped-defs, this setup only reports errors where you have explicitly annotated a variable or function parameter, while still allowing unannotated code to pass unchecked[4].
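
    As a quick illustration (a minimal sketch, not taken from any of the cited tutorials), once "basic" or "strict" checking is enabled, Pylance/Pyright and mypy will flag calls and assignments that contradict your annotations:

    ```python
    def greet(name: str, times: int = 1) -> str:
        # The checker infers the return type and validates every call site.
        return ", ".join([f"Hello, {name}"] * times)

    greet("Ada", times=2)       # OK
    greet(42)                   # flagged: "int" is not assignable to parameter of type "str"
    reply: int = greet("Bob")   # flagged: "str" is not assignable to declared type "int"
    ```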

    Citations: [1] https://www.emmanuelgautier.com/blog/enable-vscode-python-type-checking [2] https://marketplace.visualstudio.com/items?itemName=njqdev.vscode-python-typehint [3] https://youtube.com/watch?v=hHBp0r4w86g [4] https://dev.to/jodaut/python-type-checking-with-visual-studio-code-46a7 [5] https://stackoverflow.com/questions/45829353/python-type-checking-in-vs-code [6] https://github.com/microsoft/pyright

    2
  • Tingling and palpitations spreading throughout the body

    The tingling and palpitations that have been spreading throughout the body can be caused by several conditions. Here are some possibilities based on the search results:

    • Episodic Neurologic Symptoms: Patients with this condition usually complain of severe apprehension, dizziness, chest discomfort, palpitations, or fear of "losing control." The onset is usually under age 40[1].

    • Heart Attack or Panic Attack: Both conditions can cause chest pain, shortness of breath, dizziness, sweating, passing out, tingling, or a sensation of impending doom. These shared symptoms of heart and panic attack may also be caused by other serious conditions such as blood clots, lung infection or collapse, or tear in the large vessels of the chest for patients with certain pre-existing risk factors[2].

    • Atrial Fibrillation: The most obvious symptom of this condition is heart palpitations, where the heart feels like it's pounding, fluttering, or beating irregularly, often for a few seconds or possibly a few minutes. Other symptoms include tiredness, feeling lethargic, chest pain, dizziness, and shortness of breath[3][6].

    • Peripheral Neuropathy: Symptoms of this condition include numbness and tingling in the feet or hands, burning, stabbing or shooting pain in affected areas, loss of balance and coordination, and muscle weakness, especially in the feet. These symptoms are usually constant but may come and go[4].

    It's important to see a doctor if you experience any of these symptoms to determine the underlying cause and receive appropriate treatment.

    Citations: [1] https://www.ncbi.nlm.nih.gov/books/NBK374/ [2] https://www.cedars-sinai.org/blog/is-it-a-heart-attack-or-a-panic-attack.html [3] https://www.nhs.uk/conditions/atrial-fibrillation/symptoms/ [4] https://www.nhsinform.scot/illnesses-and-conditions/brain-nerves-and-spinal-cord/peripheral-neuropathy [5] https://www.webmd.com/heart-disease/heart-palpitations-emergency [6] https://www.hopkinsmedicine.org/health/conditions-and-diseases/atrial-fibrillation/afib-symptoms

    0
  • How to download a website for offline browsing using wget?

    To download a website for offline browsing using wget, you can use the following command:

    ```bash
    wget --recursive --no-clobber --page-requisites --html-extension --convert-links --restrict-file-names=windows --domains example.com --no-parent http://www.example.com
    ```

    Replace example.com with the website you want to download. This command will download the entire website, including all the elements that compose the page (images, CSS, etc.), and convert the links so that they work locally and offline[1].

    Here's a brief explanation of the options used in the command:

    • --recursive: Download the entire website recursively.
    • --no-clobber: Don't overwrite existing files.
    • --page-requisites: Download all the elements required to display the page properly (images, CSS, etc.).
    • --html-extension: Save files with the .html extension.
    • --convert-links: Convert links so that they work locally and offline.
    • --restrict-file-names=windows: Modify filenames so that they work in Windows as well.
    • --domains example.com: Don't follow links outside the specified domain.
    • --no-parent: Don't ascend to the parent directory; this keeps the download within the starting directory.

    If the website uses external resources like jQuery, you can try using the --span-hosts option to download resources from other domains as well:

    ```bash
    wget --recursive --no-clobber --page-requisites --html-extension --convert-links --restrict-file-names=windows --domains example.com --span-hosts --no-parent http://www.example.com
    ```

    However, this may also download additional content from other domains that you might not need. To avoid downloading unnecessary content, you can manually download the required resources (like jQuery) and update the HTML files to use the local copies instead.

    Keep in mind that some websites may have measures in place to prevent downloading their content using tools like wget. In such cases, you may need to adjust the command options or use alternative methods to download the website for offline browsing[6].

    Citations: [1] https://www.linuxjournal.com/content/downloading-entire-web-site-wget [2] https://winaero.com/make-offline-copy-of-a-site-with-wget-on-windows-and-linux/amp/ [3] https://stackoverflow.com/questions/10842263/wget-download-for-offline-viewing-including-absolute-references [4] https://askubuntu.com/questions/391622/download-a-whole-website-with-wget-or-other-including-all-its-downloadable-con [5] https://superuser.com/questions/970323/using-wget-to-copy-website-with-proper-layout-for-offline-browsing [6] https://www.computerhope.com/unix/wget.htm [7] https://superuser.com/questions/1672776/download-whole-website-wget [8] https://gist.github.com/stvhwrd/985dedbe1d3329e68d70 [9] https://simpleit.rocks/linux/how-to-download-a-website-with-wget-the-right-way/ [10] https://www.guyrutenberg.com/2014/05/02/make-offline-mirror-of-a-site-using-wget/ [11] https://linuxreviews.org/Wget:_download_whole_or_parts_of_websites_with_ease [12] https://brain-dump.space/articles/how-to-get-full-offline-website-copy-using-wget-on-mac-os/ [13] https://dev.to/jjokah/how-to-download-an-entire-website-for-offline-usage-using-wget-2lli [14] https://alvinalexander.com/linux-unix/how-to-make-offline-mirror-copy-website-with-wget [15] https://askubuntu.com/questions/979655/using-wget-and-having-websites-working-properly-offline

    7
  • limit python requests to 1.39 requests per second

    To limit your Python requests to 1.39 requests per second, you can use the time.sleep() function from the time module to introduce a delay between requests. You can also use third-party libraries like pyrate-limiter, requests-ratelimiter, or ratelimiter to achieve rate limiting.

    Here's an example using the time module:

    ```python
    import requests
    import time

    urls = [...]  # List of URLs to send requests to

    for url in urls:
        response = requests.get(url)
        print(response.status_code)
        time.sleep(1 / 1.39)  # Introduce a delay to achieve 1.39 requests per second
    ```

    Alternatively, you can use the pyrate-limiter library:

    ```python
    from pyrate_limiter import Duration, RequestRate, Limiter
    import requests

    # 1 request per (1/1.39) s, i.e. 1.39 requests per second
    rate_limit = RequestRate(1, Duration.SECOND / 1.39)
    limiter = Limiter(rate_limit)

    urls = [...]  # List of URLs to send requests to

    for url in urls:
        # The bucket name is arbitrary; delay=True makes the limiter wait until a slot is free
        with limiter.ratelimit("requests", delay=True):
            response = requests.get(url)
            print(response.status_code)
    ```

    Another option is to use the requests-ratelimiter library:

    ```python
    from requests import Session
    from requests_ratelimiter import LimiterAdapter

    session = Session()

    # Apply a rate limit of 1.39 requests per second to all requests
    adapter = LimiterAdapter(per_second=1.39)
    session.mount('http://', adapter)
    session.mount('https://', adapter)

    urls = [...]  # List of URLs to send requests to

    for url in urls:
        response = session.get(url)
        print(response.status_code)
    ```

    These examples demonstrate different ways to limit your Python requests to 1.39 requests per second. Choose the method that best suits your needs and project requirements.

    Citations: [1] https://stackoverflow.com/questions/26098711/limiting-number-of-http-requests-per-second-on-python [2] https://pypi.org/project/requests-ratelimiter/ [3] https://github.com/vutran1710/PyrateLimiter [4] https://akshayranganath.github.io/Rate-Limiting-With-Python/ [5] https://stackoverflow.com/questions/40748687/python-api-rate-limiting-how-to-limit-api-calls-globally [6] https://pypi.org/project/limiter/ [7] https://github.com/JWCook/requests-ratelimiter [8] https://levelup.gitconnected.com/implement-rate-limiting-in-python-d4f86b09259f [9] https://limits.readthedocs.io [10] https://github.com/encode/httpx/issues/815 [11] https://365datascience.com/tutorials/python-tutorials/limit-rate-requests-web-scraping/ [12] https://www.seelk.co/blog/efficient-client-side-handling-of-api-throttling-in-python-with-tenacity [13] https://www.cisco.com/c/en/us/support/docs/security/firepower-ngfw/217900-troubleshoot-firepower-threat-defense-an.html [14] https://scrapfly.io/blog/how-to-rate-limit-asynchronous-python-requests/ [15] https://dev.to/paymon123/the-easiest-way-to-rate-limit-a-python-api-3njc [16] https://cloud.google.com/python/docs/reference/storage/1.39.0/retry_timeout [17] https://medium.com/clover-platform-blog/conquering-api-rate-limiting-dcac5552714d [18] https://subscription.packtpub.com/book/web_development/9781838983994/9/ch09lvl1sec72/api-rate-limiting [19] https://towardsdatascience.com/speeding-up-python-code-fast-filtering-and-slow-loops-8e11a09a9c2f [20] https://katiekodes.com/python-wrap-requests-functions/ [21] https://www.reddit.com/r/Python/comments/12xahnb/i_built_a_simple_and_efficient_rate_limiter_for/ [22] https://docs.alcf.anl.gov/theta/performance-tools/craypat/ [23] https://coderpad.io/blog/development/a-guide-to-api-rate-limiting-in-django/

    4
  • How to Get a List of All Issues for a Repo from the GitHub API using a Personal Access Token?

    To get a list of all issues for https://github.com/LemmyNet/lemmy from the GitHub API, you can use the following steps:

    1. First, you need to obtain a personal access token from GitHub. You can follow the instructions given in this link to create a personal access token: https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token

    2. Once you have obtained the personal access token, you can use it to authenticate your requests to the GitHub API. You can use any HTTP client library to make requests to the GitHub API. In this example, we will use the requests library in Python.

    3. To get a list of all issues for https://github.com/LemmyNet/lemmy, you can make a GET request to the following URL:

    https://api.github.com/repos/LemmyNet/lemmy/issues

    4. You need to pass the personal access token as a header in the request. The header should be of the form:

    Authorization: Bearer <personal-access-token>

    Here's an example Python code snippet that demonstrates how to get a list of all issues for https://github.com/LemmyNet/lemmy using the GitHub API:

    ```python
    import requests

    # Replace <personal-access-token> with your personal access token
    headers = {
        'Authorization': 'Bearer <personal-access-token>',
    }

    url = 'https://api.github.com/repos/LemmyNet/lemmy/issues'

    response = requests.get(url, headers=headers)

    # Print the response content
    print(response.content)
    ```

    This code snippet makes a GET request to the GitHub API to get a list of all issues for https://github.com/LemmyNet/lemmy. The personal access token is passed as a header in the request. The response content is printed to the console. You can modify this code snippet to suit your needs.
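
    Note that, by default, this endpoint returns only open issues, 30 per page, and the results also include pull requests (which carry a pull_request key). Below is a minimal sketch of how you might page through everything, assuming the same token and repository as above:

    ```python
    import requests

    headers = {'Authorization': 'Bearer <personal-access-token>'}
    url = 'https://api.github.com/repos/LemmyNet/lemmy/issues'

    issues = []
    page = 1
    while True:
        # state=all fetches open and closed issues; per_page=100 is the API maximum
        response = requests.get(
            url,
            headers=headers,
            params={'state': 'all', 'per_page': 100, 'page': page},
        )
        batch = response.json()
        if not batch:
            break
        # Skip pull requests, which the issues endpoint also returns
        issues.extend(item for item in batch if 'pull_request' not in item)
        page += 1

    print(f"Fetched {len(issues)} issues")
    ```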

    Citations: [1] https://github.com/LemmyNet/lemmy [2] https://github.com/LemmyNet/lemmy/issues [3] https://github.com/LemmyNet/lemmy/issues/2937 [4] https://github.com/LemmyNet/lemmy-docs/issues [5] https://github.com/LemmyNet/lemmy/issues/3366 [6] https://github.com/LemmyNet/lemmy/issues/1368

    1
  • How to avoid defining sqlite3 conn and cursor in many functions across the code?

    To avoid defining the sqlite3.Connection and sqlite3.Cursor in many functions throughout your code, you can use the following approaches:

    1. Use a context manager: Create a context manager that handles the connection and cursor creation, as well as closing the connection when done. This way, you can use the with statement to manage the connection and cursor in your functions.

    ```python
    import sqlite3

    DB_FILE = "your_database_file.db"

    class DatabaseConnection:
        def __enter__(self):
            self.conn = sqlite3.connect(DB_FILE)
            self.cursor = self.conn.cursor()
            return self.cursor

        def __exit__(self, exc_type, exc_val, exc_tb):
            self.conn.commit()
            self.cursor.close()
            self.conn.close()

    def insert_post_to_db(issue: Issue, lemmy_post_id: int) -> None:
        with DatabaseConnection() as cursor:
            cursor.execute(
                "INSERT INTO posts (issue_url, lemmy_post_id, issue_title, issue_body) VALUES (?, ?, ?, ?)",
                (issue.url, lemmy_post_id, issue.title, issue.formatted_body),
            )
    ```

    2. Use a function decorator: You can create a decorator that handles the connection and cursor creation and passes the cursor to the decorated function.

    ```python
    import sqlite3
    from functools import wraps

    DB_FILE = "your_database_file.db"

    def with_cursor(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            conn = sqlite3.connect(DB_FILE)
            cursor = conn.cursor()

            result = func(cursor, *args, **kwargs)

            conn.commit()
            cursor.close()
            conn.close()

            return result

        return wrapper

    @with_cursor
    def insert_post_to_db(cursor: sqlite3.Cursor, issue: Issue, lemmy_post_id: int) -> None:
        cursor.execute(
            "INSERT INTO posts (issue_url, lemmy_post_id, issue_title, issue_body) VALUES (?, ?, ?, ?)",
            (issue.url, lemmy_post_id, issue.title, issue.formatted_body),
        )
    ```

    Both of these approaches help you avoid defining the connection and cursor in every function that interacts with the database. The context manager approach is more explicit and easier to read, while the function decorator approach is more concise. Choose the one that best fits your coding style and project requirements.
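
    A third, lighter-weight variant is a generator-based context manager from the standard library's contextlib module. This is only a sketch along the same lines as the examples above (the helper name get_cursor is made up), but it keeps the same commit-and-close behavior in fewer lines:

    ```python
    import sqlite3
    from contextlib import contextmanager

    DB_FILE = "your_database_file.db"

    @contextmanager
    def get_cursor():
        conn = sqlite3.connect(DB_FILE)
        try:
            yield conn.cursor()
            conn.commit()  # commit only if the with-block finished without an exception
        finally:
            conn.close()   # closing the connection also releases its cursors

    def insert_post_to_db(issue, lemmy_post_id: int) -> None:
        with get_cursor() as cursor:
            cursor.execute(
                "INSERT INTO posts (issue_url, lemmy_post_id, issue_title, issue_body) VALUES (?, ?, ?, ?)",
                (issue.url, lemmy_post_id, issue.title, issue.formatted_body),
            )
    ```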

    Citations: [1] https://stackoverflow.com/questions/14511337/efficiency-of-reopening-sqlite-database-after-each-query [2] https://stackoverflow.com/questions/50075325/python-sqlite3-nested-cursor-execute [3] https://blog.udemy.com/python-sqlite/ [4] https://stackoverflow.com/questions/54395773/what-are-the-side-effects-of-reusing-a-sqlite3-cursor [5] https://pynative.com/python-sqlite/ [6] https://arctype.com/blog/guide-sqlite-python/ [7] https://sqlite.org/forum/info/4393a42b3b5e2382 [8] https://docs.python.org/3/library/sqlite3.html [9] https://www.reddit.com/r/learnpython/comments/94i4k9/using_a_global_sqlite_cursor_across_multiple/ [10] https://stackoverflow.com/questions/9561832/what-if-i-dont-close-the-database-connection-in-python-sqlite [11] https://climbtheladder.com/10-python-sqlite-best-practices/ [12] https://pypi.org/project/cuttlepool/ [13] https://www.sitepoint.com/sqlite-python/ [14] https://pyneng.readthedocs.io/en/latest/book/25_db/sqlite3.html [15] https://www.geeksforgeeks.org/python-sqlite-connecting-to-database/ [16] https://towardsdatascience.com/python-sqlite-tutorial-the-ultimate-guide-fdcb8d7a4f30 [17] https://codereview.stackexchange.com/questions/285730/simple-connection-pool-for-sqlite-in-python [18] https://developer.android.com/training/data-storage/sqlite [19] https://www.blog.pythonlibrary.org/2021/09/30/sqlite/ [20] https://www.digitalocean.com/community/tutorials/how-to-use-the-sqlite3-module-in-python-3 [21] https://developer.android.com/topic/performance/sqlite-performance-best-practices [22] https://www.reddit.com/r/learnpython/comments/8tkbor/how_does_sqlalchemy_connection_pooling_work_with/ [23] https://pymotw.com/2/sqlite3/ [24] https://vegibit.com/interact-with-databases-using-the-python-sqlite3-module/ [25] https://blog.rtwilson.com/a-python-sqlite3-context-manager-gotcha/ [26] https://remusao.github.io/posts/few-tips-sqlite-perf.html [27] https://www.digitalocean.com/community/tutorials/how-to-use-an-sqlite-database-in-a-flask-application [28] https://www.tutorialspoint.com/sqlite/sqlite_python.htm [29] https://www.sqlite.org/whentouse.html [30] https://rogerbinns.github.io/apsw/execution.html [31] https://stackoverflow.com/questions/42635749/sqlite-database-connection-best-practice [32] https://realpython.com/python-mysql/

    0
  • Write script to repost github issues on lemmy

    Use pythorhead (https://github.com/db0/pythorhead) to write a Python script that reposts every GitHub issue from https://github.com/LemmyNet/lemmy and https://github.com/LemmyNet/lemmy-ui to https://lemm.ee/c/issue_tracker

    Follow this template for the posts

    URL: https://github.com/LemmyNet/lemmy-ui/issues/number

    Title: [UI][Feature Request] Title

    Body: Use CommonMark markdown to quote the body and add a line saying:

    > Originally posted by user in #1930

    Edit: the fixed script is here

    7
  • Why are we stuck with bash programming language in the shell?

    Bash-like scripting has become ubiquitous in operating systems, and it makes me wonder about its widespread adoption despite lacking certain programming conveniences found in other languages. While it may not be the ideal choice for large-scale software development, the bash shell possesses unique features that make it well-suited for interactive command-line processing, including pipelining, text manipulation, and file handling. Why isn't there an alternative that combines the best of bash's command-line capabilities with the robustness and scalability of traditional programming languages? Why do even new operating systems, such as Redox OS, opt for a similar syntax rather than a completely different programming language?

    Here are some of the limitations I find in Bash compared to other programming languages:

    1. Syntax and Expressiveness:

      • Bash has a relatively simple syntax compared to other programming languages. It lacks some advanced language features such as object-oriented programming, complex data structures, and advanced control flow constructs.
      • The syntax of Bash can be less intuitive and more error-prone, especially for complex tasks or larger projects.
    2. Performance:

      • Bash scripts can be slower compared to compiled languages like C or Java. This is because Bash is an interpreted language, and each line of code is interpreted at runtime.
      • Bash may not be the best choice for computationally intensive tasks or applications that require high performance.
    3. Error Handling and Debugging:

      • Error handling and debugging in Bash can be challenging. Bash does not provide robust error handling mechanisms, and error messages can be cryptic and difficult to interpret.
      • Debugging Bash scripts can be cumbersome, as there is limited tooling and debugging support compared to other programming languages.
    4. Portability:

      • While Bash is available on most Unix-like systems, it may not be available on all platforms or versions. This can limit the portability of Bash scripts.
      • Bash scripts may not work as expected on non-Unix systems or require modifications to run on different platforms.
    5. Limited Standard Library:

      • Bash has a limited standard library compared to other programming languages. It lacks comprehensive libraries for tasks such as networking, database access, or advanced data manipulation.
      • Bash often relies on external tools or utilities to perform complex operations, which can introduce dependencies and compatibility issues.
    6. Lack of Modularity and Reusability:

      • Bash scripts can become monolithic and difficult to maintain as they grow in size. Bash does not provide strong mechanisms for modularization or code reuse.
      • Reusing code or creating libraries in Bash can be challenging, leading to code duplication and decreased maintainability.
    3
  • How much exercise should a highly sedentary adult do?

    A highly sedentary adult aged 18-64 should aim to gradually increase their physical activity levels to meet the recommended guidelines. According to the World Health Organization (WHO) and the Centers for Disease Control and Prevention (CDC), adults in this age range should:

    1. Do at least 150-300 minutes of moderate-intensity aerobic physical activity (e.g., brisk walking) or at least 75-150 minutes of vigorous-intensity aerobic physical activity (e.g., jogging or running) per week, or an equivalent combination of both[1][3].
    2. Engage in muscle-strengthening activities at moderate or greater intensity that involve all major muscle groups on 2 or more days a week[1][3].
    3. Limit the amount of time spent being sedentary and replace sedentary time with physical activity of any intensity, including light intensity, for health benefits[3].

    It's important to note that some physical activity is better than none, and adults who sit less and engage in any amount of moderate-to-vigorous intensity physical activity will gain health benefits[2]. As a highly sedentary adult, you should start with small, manageable increments of physical activity and gradually increase the duration, frequency, and intensity as your fitness level improves. It's also a good idea to consult with a healthcare professional before starting a new exercise program, especially if you have any medical conditions or concerns[5].

    Citations:

    [1] https://www.cdc.gov/physicalactivity/basics/age-chart.html

    [2] https://www.cdc.gov/physicalactivity/basics/adults/index.htm

    [3] https://www.who.int/news-room/fact-sheets/detail/physical-activity

    [4] https://www.health.gov.au/topics/physical-activity-and-exercise/physical-activity-and-exercise-guidelines-for-all-australians/for-adults-18-to-64-years

    [5] https://www.nhs.uk/live-well/exercise/exercise-guidelines/physical-activity-guidelines-for-adults-aged-19-to-64/

    [6] https://www.acsm.org/education-resources/trending-topics-resources/physical-activity-guidelines

    [7] https://www.ncbi.nlm.nih.gov/books/NBK305058/

    [8] https://www.ncbi.nlm.nih.gov/books/NBK566046/

    [9] https://health.gov/sites/default/files/2019-09/Physical_Activity_Guidelines_2nd_edition.pdf

    2
  • Price of insulin in China, Cuba, Laos, North Korea, and Vietnam?

    Information on the price of insulin in China, Cuba, Laos, North Korea, and Vietnam is not readily available in the search results. However, there is information on the cost of insulin in some other countries. According to the World Population Review, the cost of insulin varies widely by country, with the United States having the highest prices[1]. In 2018, RAND Corporation reported that the manufacturer price for any given type of insulin averaged five to ten times higher in the United States than in all other OECD countries[1]. Novo Nordisk, a major insulin manufacturer, has recently slashed the price of insulin in the United States[3].

    A study published in PMC in 2019 examined the prices, availability, and affordability of insulin in 13 low-income and middle-income countries. The study found that median procurement prices of human insulins across the countries ranged from $4.56 to $5.30, with large price variations seen across the countries[4]. The highest prices were found in two Chinese provinces, with isophane human insulin costing $24.72 in Shaanxi, China[4].

    There is also information on the cost of insulin in South Korea, which stands at an average of $10.30 per unit[2]. According to Volza's global import data, insulin syringe import shipments from South Korea stood at 1.6K, imported by 98 importers worldwide from 82 South Korean suppliers[5]. China is one of the top exporters of insulin syringes, with 2,155 shipments[6].

    Citations:

    [1] https://worldpopulationreview.com/country-rankings/cost-of-insulin-by-country

    [2] https://wisevoter.com/country-rankings/cost-of-insulin-by-country/

    [3] https://www.fiercepharma.com/pharma/novo-nordisk-follows-eli-lillys-lead-slashing-insulin-prices-us

    [4] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6570978/

    [5] https://www.volza.com/imports-global/global-import-data-of-insulin+syringe-from-south-korea

    [6] https://www.volza.com/p/syringe-insulin/export/export-from-china/

    0
  • Find fast Lemmy instances

    To improve the given script using asyncio, you can use the aiohttp library for making asynchronous HTTP requests. Here's the modified version of the script:

    ```python
    #!/usr/bin/env python3
    import json
    import asyncio
    import aiohttp
    import time

    from typing import List, Dict

    TIME_BETWEEN_REQUESTS = 5
    TIME_TOTAL = 60

    async def get_latency(session, domain):
        try:
            start = time.time()
            if not domain.startswith(("http://", "https://")):
                domain = "https://" + domain
            async with session.get(domain, timeout=3) as response:
                end = time.time()
                return end - start
        except asyncio.TimeoutError:
            return float("inf")
        except aiohttp.client_exceptions.ServerDisconnectedError:
            return float("inf")

    def add_latency_to_domain(domain, latency, latencies):
        if domain not in latencies:
            latencies[domain] = []
        latencies[domain].append(latency)
        return latencies

    async def measure_latencies_for_domains(session, domains, latencies):
        tasks = []
        for domain in domains:
            tasks.append(get_latency(session, domain))

        results = await asyncio.gather(*tasks)

        for domain, latency in zip(domains, results):
            latencies = add_latency_to_domain(domain, latency, latencies)

        return latencies

    async def measure_latencies(domains, duration):
        latencies = {}
        start_time = time.time()
        end_time = start_time + duration

        async with aiohttp.ClientSession() as session:
            while time.time() < end_time:
                latencies = await measure_latencies_for_domains(session, domains, latencies)
                await asyncio.sleep(TIME_BETWEEN_REQUESTS)

        return latencies

    def average_latencies(latencies):
        averages = []
        for domain, latency_list in latencies.items():
            avg_latency = sum(latency_list) / len(latency_list)
            averages.append((domain, avg_latency))
        return averages

    def sort_latencies(averages):
        return sorted(averages, key=lambda x: x[1])

    async def get_latency_report(domains, duration):
        latencies = await measure_latencies(domains, duration)
        averages = average_latencies(latencies)
        return sort_latencies(averages)

    def get_instances(data: Dict) -> List[Dict]:
        instances = []
        for instance_details in data["instance_details"]:
            instances.append(instance_details)
        return instances

    def get_domains(instances: List[Dict]) -> List[str]:
        return [instance["domain"] for instance in instances]

    def load_json_data(filepath: str) -> Dict:
        with open(filepath) as json_data:
            return json.load(json_data)

    async def main():
        data = load_json_data('stats.json')
        instances = get_instances(data)
        domains = get_domains(instances)
        report = await get_latency_report(domains, TIME_TOTAL)
        for domain, avg_latency in report:
            print(f"{domain}: {avg_latency:.2f} seconds")

    if __name__ == "__main__":
        asyncio.run(main())
    ```

    This version of the script should be able to ping multiple instances simultaneously using asyncio.

    0
  • Enhancing the Lemmy Experience: Weighted Community Preferences for a Better Subscribed Feed

    As a frequent user of Lemmy, I've noticed an issue that affects my browsing experience, particularly in my Subscribed feed. Currently, the platform does not allow users to view more content from specific communities they prefer over others. This limitation forces users to unsubscribe from larger communities to see more content from their favorite ones[1].

    To improve the user experience, I propose the implementation of weighted community preferences. This feature would enable users to assign a weight to each community they are subscribed to, allowing them to see more content from their preferred communities in their Subscribed feed. By doing so, users can enjoy a more personalized browsing experience without having to unsubscribe from larger communities.

    This feature would address the issue of larger communities dominating the Subscribed feed, which has been reported by other users as well[1]. By giving users the ability to control the prominence of specific communities in their feed, Lemmy can foster a more engaging and enjoyable experience for its users.

    In addition to weighted community preferences, it would be beneficial to explore other ways to improve the Subscribed feed, such as:

    • Implementing better sorting algorithms that take into account the user's browsing habits and preferences[1].
    • Allowing users to group communities from different instances and aggregate them into a folder, as suggested in a previous discussion[2].

    By addressing these issues and implementing features like weighted community preferences, Lemmy can continue to grow and provide a more satisfying experience for its users. I encourage the Lemmy development team and the community to consider these suggestions and work together to enhance the platform for everyone.

    Let's make Lemmy an even better place for our diverse interests and communities!

    Citations: [1]: Lemmy Post: Issue with Subscribed feed [2]: Lemmy Comment: Grouping communities from different instances

    0
  • ELI5 Dead Internet Theory

    The Dead Internet theory is an online conspiracy theory that suggests that the internet is now primarily composed of bot activity and algorithmically generated content, which marginalizes organic human activity[0]. It is believed that intelligent bots are being used to manipulate algorithms and boost search results, ultimately influencing consumers. This theory emerged in the late 2010s or early 2020s and gained traction among technology enthusiasts, researchers, and futurists who were concerned about the increasing complexity and potential risks associated with our reliance on the internet[0].

    The origins of the Dead Internet theory are difficult to pinpoint, but some attribute its emergence to a thread titled "Dead Internet Theory: Most Of The Internet Is Fake" on the forum Agora Road’s Macintosh Cafe in 2021[0]. The theory gained more attention through coverage on various YouTube channels and an article in The Atlantic titled "Maybe You Missed It, but the Internet ‘Died’ Five Years Ago"[2]. The article explores the theory in depth, discussing how the internet is perceived as being "empty and devoid of people" and how AI-generated content and bot activity have contributed to this perception[2].

    ChatGPT, an AI chatbot released in 2022, brought new attention to the Dead Internet theory. It raised concerns that the internet would become overwhelmed with content created by individuals using AI, potentially overshadowing organic human content[0].

    While the Dead Internet theory is considered a conspiracy theory, it has sparked discussions about the internet's changing landscape and the potential impact of AI and algorithmic manipulation. It raises questions about the authenticity of online content and the influence of bots and algorithms on public perception[7].

    In terms of addressing the Dead Internet theory, it's important to approach it with critical thinking and skepticism. The theory is based on speculation and lacks concrete evidence. It is crucial to rely on reputable sources and research to evaluate the validity of such claims. It's also important to keep in mind that advancements in AI and algorithms have both positive and negative implications, and understanding their potential impact is an ongoing area of study[7].

    To summarize:

    • The Dead Internet theory is an online conspiracy theory suggesting that the internet is primarily comprised of bot activity and algorithmically generated content, marginalizing organic human activity[0].
    • The theory emerged in the late 2010s or early 2020s and gained attention through online forums, YouTube channels, and articles.
    • ChatGPT, an AI chatbot released in 2022, brought new attention to the theory.
    • It's important to approach the theory with skepticism and rely on reputable sources and research to evaluate its validity.
    • The impact of AI and algorithms on the internet is an ongoing area of study, and it's important to consider both the positive and negative implications of these technologies[7].

    Citations:

    [0]: Wikipedia: Dead Internet theory [2]: The Atlantic: Maybe You Missed It, but the Internet ‘Died’ Five Years Ago [7]: The Daily Free Press: The Dead Internet Theory

    0
  • Trust Levels: The Key to Combating AGI Spam and Maintaining Healthy Online Communities

    The rapid advancement of artificial intelligence and the increasing number of users in online communities necessitate the implementation of trust levels, in addition to traditional roles like normal users, moderators, and admins[1]. Trust levels not only help manage the growing user base but also provide a crucial defense against intelligent bots that can bypass registration obstacles, such as CAPTCHAs[2]. These bots have the potential to flood platforms like Lemmy with spam, disrupting the user experience and undermining the community's integrity[3].

    Trust levels offer a way to sandbox new users, preventing them from accidentally causing harm while they learn the ropes, and gradually granting experienced users more rights to help maintain and moderate the community[1]. This system is particularly important in the face of intelligent bots and the increasing use of large language models (LLMs) to generate spam content, as seen in recent Reddit protests[4][5]. By incorporating trust levels, online communities can better protect themselves from potential threats and maintain a safe, engaging environment for genuine users[6][7].

    Moreover, trust levels can help address the issue of too many users for the number of moderators, as they empower experienced users to take on more responsibilities in maintaining the community[1]. This approach fosters a sense of ownership and shared responsibility among community members, leading to stronger, more sustainable communities[6][8]. In conclusion, trust levels are a vital solution to the challenges posed by intelligent bots, LLM-generated spam, and the growing number of users in online communities.

    Citations: [1]: Understanding Discourse Trust Levels [2]: How Attackers Use Request Bots to Bypass Your Bot Mitigation Solution [3]: Lemmy Issue: Spam Flood [4]: Protest Against Reddit API Changes [5]: Generative AI and Trust [6]: The Science of Building Trust in Social Media Communities [7]: Establishing Trust and Safety in Online Communities [8]: Building Trust in Communities

    0
  • Flatpak vs AUR comparison

    Flatpak and AUR are two different methods for distributing and installing software on Linux systems. Here is a detailed comparison of their advantages and disadvantages:

    Flatpak

    Advantages:

    1. Cross-distribution compatibility: Flatpak packages can be installed on any Linux distribution that supports Flatpak, making it easier for developers to distribute their applications[1].
    2. Sandboxing and security: Flatpak applications are isolated from the host system, which enhances security and reduces the risk of system breakage[2].
    3. Delta updates: Flatpak allows updating only the modified parts of an application, resulting in faster updates and reduced bandwidth usage[3].
    4. No administrator privileges required: Flatpak applications can be installed without administrator privileges[3].

    Disadvantages:

    1. Larger package sizes: Flatpak packages include all necessary dependencies, which can result in larger package sizes compared to native packages[2].
    2. Slower launch times: Flatpak applications may have slightly slower launch times compared to native applications[2].
    3. Limited to desktop applications: Flatpak currently does not support server applications[4].

    AUR (Arch User Repository)

    Advantages:

    1. Transparency: AUR is a community-driven repository, and its packages are generally clear and transparent[5].
    2. Smaller package sizes: AUR packages typically have smaller sizes compared to Flatpak packages[6].
    3. Higher number of packages: AUR has a vast number of packages available, often more than other repositories[2].

    Disadvantages:

    1. Distribution-specific: AUR packages are designed specifically for Arch-based distributions, which may limit their compatibility with other Linux distributions[1].
    2. Potential security risks: AUR packages are maintained by volunteers, and their quality and security may vary. Users should exercise caution and verify the trustworthiness of packages before installation[7].

    In summary, Flatpak offers better cross-distribution compatibility, sandboxing, and security, while AUR provides transparency, smaller package sizes, and a larger number of packages. The choice between Flatpak and AUR depends on your specific needs and preferences. If you prioritize cross-distribution compatibility and security, Flatpak might be a better choice. On the other hand, if you prefer smaller package sizes and a larger selection of packages, AUR could be more suitable.

    Citations:

    [1]: Reddit - Pros and Cons of Flatpaks [2]: Linux Addicts - Comparing Snap and Flatpak [3]: It's FOSS - What is Flatpak? [4]: It's FOSS - What is Flatpak? [5]: EndeavourOS Forum - Current Best Practice: Flatpak or AUR? [6]: Reddit - AUR vs Snap vs Flatpak [7]: Manjaro Forum - AUR or Snap App Version to Minimize Risk of Breakage

    0
  • Why use Tixati over qBittorrent?

    There are several reasons why someone might choose to use Tixati over qBittorrent, although qBittorrent is generally recommended for most people[1]. Here are some advantages of Tixati:

    1. No spyware, ads, or gimmicks: Tixati is known for being free of spyware, ads, and gimmicks, which can be appealing to users who prioritize privacy and a clean user experience[2].

    2. Detailed swarm views: Tixati offers detailed views of all aspects of the swarm, including peers, pieces, files, and trackers. This can be useful for users who want more control and information about their torrent downloads[2].

    3. Lightweight and fast performance: Tixati is known for its lightweight design and fast performance, which can be beneficial for users with limited system resources or those who want a more efficient torrent client[3].

    4. Individual event logging for all peers: Tixati provides individual event logging for all peers, which can be helpful for users who want to monitor the activity of their torrent downloads more closely[3].

    5. Support for magnet links: Tixati supports magnet links, eliminating the need to download .torrent files[2].

    However, it's important to note that qBittorrent also offers many of these features and is generally considered a more popular and well-rounded torrent client[1][4]. Some advantages of qBittorrent include:

    1. Open-source and ad-free: qBittorrent is an open-source, ad-free torrent client, which allows users to verify that the software is secure and respects privacy[1].

    2. Built-in search engine: qBittorrent has a built-in search engine, making it easy to find and download torrents directly from the client[5].

    3. Remote torrent management: qBittorrent offers remote torrent management, allowing users to control their torrent downloads from a different device[6].

    4. Cross-platform compatibility: qBittorrent is available on all major desktop platforms, including Windows, macOS, and Linux[6].

    Ultimately, the choice between Tixati and qBittorrent will depend on your personal preferences and needs. While Tixati has some unique features, qBittorrent is generally considered a more popular and well-rounded option[1][4].

    Citations:

    [1] https://www.slant.co/versus/3724/3726/~qbittorrent_vs_tixati

    [2] https://www.tixati.com/discover/

    [3] https://www.techradar.com/reviews/tixati

    [4] https://www.saashub.com/compare-qbittorrent-vs-tixati

    [5] https://moviemaker.minitool.com/moviemaker/qbittorrent-vs-utorrent.html

    [6] https://www.cloudwards.net/qbittorrent-review/

    0
  • Balancing Scores on Lemmy: Exploring Options for Fairness and Relevance

    Hello fellow Lemmy users and enthusiasts!

    Today, we want to dive into the topic of balancing scores on Lemmy and discuss some of the different options that have been proposed. We'll cover the suggestions mentioned in the official GitHub repository[^1026], as well as some additional ideas that could contribute to a fair and relevant scoring system.

    1. Affinity to Admin: One of the proposed options is to increase the weight of votes based on the user's affinity to the admin[^1026]. This means that the content of the instance would be most relevant to the admin, incentivizing self-hosting Lemmy. This approach aims to prioritize the preferences of the admin, potentially resulting in a more tailored and focused community for that particular instance.

    2. Score Posts based on Community Size: Another suggestion put forward in the GitHub repository is to score posts based on the community size at the time of voting[^2794]. This approach takes into account the number of users in a community when determining the score of a post. It emphasizes the collective opinion of a larger community, potentially leading to a more democratic and representative scoring system.

    3. Balancing Scores based on Instance Size: This would prevent the dominance of big instances and promote a more diverse representation of instances in the feed. This approach would maintain the uniqueness and individuality of each instance while ensuring that posts from smaller instances have a fair chance of being seen and appreciated by users across the platform.

    4. Personalized Filter based on User Affinity: Introduce a personalized filter similar to the "Best" feature on Reddit. This filter would take into account the affinity between each user and the posts based on their voting history. By keeping a score of the upvotes and downvotes given by a user[^2370], Lemmy could analyze the user's preferences and provide a more customized feed that aligns with their interests. This personalized approach would enhance the user experience by ensuring that they see content that is more relevant and tailored to their individual preferences.

    5. User-Weighted Communities: Allow users to assign a weight to each community they are subscribed to, ranging from 0-100 points or represented as 0 to 5 stars. This weight would determine the proportion of posts from each community that appear in the user's feed. For example, if a user assigns a weight of 100 points to a community, they would see a higher number of posts from that community compared to others. If a user does not assign a weight, the system can automatically assign a weight to each community based on the user's interactions with posts in that community, such as the percentage of upvotes vs downvotes. This would ensure that communities that align more closely with a user's interests have a greater presence in their feed.

    6. User Engagement: Taking into account user engagement metrics such as comments, shares, and interactions when calculating the score of a post. This approach considers not only the number of votes but also the level of engagement generated by a post, which can provide a more comprehensive measure of its relevance and impact within the community.

    7. Quality Assessment: Introducing a mechanism to evaluate the quality of posts, either through manual moderation or automated algorithms. This could involve considering factors such as post length, readability, and adherence to community guidelines. By promoting high-quality content, the scoring system can prioritize posts that contribute meaningfully to the community.

    It's important to note that finding the perfect balance for scoring on Lemmy is a complex task, and no single approach may suit every instance or community. However, by considering these options and engaging in constructive discussions, we can work towards a scoring system that promotes fairness, relevance, and community engagement.

    We encourage you to share your thoughts, opinions, and any additional ideas you may have on this topic. Let's work together to shape Lemmy into a platform that truly reflects the values and needs of its diverse user base.

    Thank you for being a part of the Lemmy community!

    Sources:

    [^3241]: Voting Affinity and Engagement Analysis

    [^1026]: The rank of a post in the aggregated feed should be inversely proportional to the size of the community #1026

    [^2794]: Score posts based on community size at the time of voting #2794

    [^2370]: Keep a score of the upvotes and downvotes given to user. #2370

    0
  • Write a script that reposts all my youtube subscription videos with above average popularity on lemmy using Python

    To repost all your YouTube subscription videos with above-average popularity on Lemmy using Python, you'll need to follow these steps:

    1. Get a YouTube API key[1].
    2. Use the YouTube API to fetch your subscription videos[2].
    3. Determine the popularity threshold (e.g., average views, likes, or comments).
    4. Filter the videos based on the popularity threshold.
    5. Use Pythorhead to interact with Lemmy and post the filtered videos[3].

    Here's a sample Python script to achieve this:

    ```python
    import requests
    from pythorhead import Lemmy

    # Replace with your YouTube API key and Lemmy credentials
    YOUTUBE_API_KEY = 'your_youtube_api_key'
    LEMMY_USERNAME = 'your_lemmy_username'
    LEMMY_PASSWORD = 'your_lemmy_password'

    # Fetch your YouTube subscription videos
    def get_youtube_subscriptions(api_key):
        # Replace with your YouTube channel ID
        channel_id = 'your_youtube_channel_id'
        url = f'https://www.googleapis.com/youtube/v3/subscriptions?part=snippet&channelId={channel_id}&maxResults=50&key={api_key}'
        response = requests.get(url)
        data = response.json()
        return data['items']

    # Determine the popularity threshold
    def get_popularity_threshold(videos):
        # Calculate the average views, likes, or comments of the videos
        # Replace this with your preferred popularity metric
        pass

    # Filter videos based on the popularity threshold
    def filter_videos(videos, threshold):
        # Filter the videos based on the popularity threshold
        # Replace this with your preferred popularity metric
        pass

    # Post filtered videos on Lemmy using Pythorhead
    def post_videos_on_lemmy(videos):
        lemmy = Lemmy("https://lemmy.dbzer0.com")
        lemmy.log_in(LEMMY_USERNAME, LEMMY_PASSWORD)
        community_id = lemmy.discover_community("your_lemmy_community")

        for video in videos:
            title = video['snippet']['title']
            url = f'https://www.youtube.com/watch?v={video["id"]}'
            lemmy.post.create(community_id, title, url)

    # Main script
    if __name__ == '__main__':
        videos = get_youtube_subscriptions(YOUTUBE_API_KEY)
        threshold = get_popularity_threshold(videos)
        filtered_videos = filter_videos(videos, threshold)
        post_videos_on_lemmy(filtered_videos)
    ```

    Replace the placeholders with your YouTube API key, Lemmy credentials, and YouTube channel ID. You'll also need to implement the get_popularity_threshold and filter_videos functions based on your preferred popularity metric (e.g., views, likes, or comments).
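
    For illustration, here is a rough sketch of those two functions using view counts as the popularity metric. It assumes you have already collected concrete video IDs (the subscriptions endpoint itself only lists the channels you follow, so you would typically fetch each channel's recent uploads first), and the helper get_view_counts is made up for this example:

    ```python
    import statistics
    import requests

    def get_view_counts(video_ids, api_key):
        # videos.list with part=statistics returns view counts for up to 50 IDs per call
        url = 'https://www.googleapis.com/youtube/v3/videos'
        params = {'part': 'statistics', 'id': ','.join(video_ids[:50]), 'key': api_key}
        items = requests.get(url, params=params).json().get('items', [])
        return {item['id']: int(item['statistics']['viewCount']) for item in items}

    def get_popularity_threshold(view_counts):
        # Use the average view count as the "above average popularity" cut-off
        return statistics.mean(view_counts.values())

    def filter_videos(view_counts, threshold):
        # Keep only the video IDs whose view count exceeds the threshold
        return [video_id for video_id, views in view_counts.items() if views > threshold]
    ```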

    Please note that this script is just a starting point, and you might need to modify it according to your specific requirements.

    Citations:

    [1] https://blog.hubspot.com/website/how-to-get-youtube-api-key

    [2] https://gist.github.com/Yiannis128/4a9c016236edf41493176a59bb0a1be0

    [3] https://github.com/db0/pythorhead

    1
  • When we go swimming and spend extended time in water, do the little bugs who live on our skin suffocate and die?

    When we go swimming and spend extended time in water, it is not clear whether the little bugs living on our skin, such as Demodex mites, suffocate and die. Demodex mites live inside hair follicles and pores, feeding on sebum, the oily substance our skin produces[1]. They are difficult to remove by washing, as they reside deep within the pores[1]. While some insects can survive underwater for short durations[2], there is no specific information available on whether Demodex mites can survive extended periods underwater.

    However, it is important to note that spending time in chlorinated water, such as in swimming pools, may have some effect on the mites. Chlorine is a disinfectant that can kill various microorganisms, but there is no direct evidence to confirm its effectiveness against Demodex mites specifically.

    In conclusion, it is uncertain whether spending extended time in water, especially chlorinated water, will cause the little bugs living on our skin to suffocate and die. However, maintaining good hygiene and washing your face regularly can help manage the population of these mites on your skin[3].

    Citations:

    [1] https://www.npr.org/sections/health-shots/2019/05/21/725087824/meet-the-mites-that-live-on-your-face

    [2] https://crops.extension.iastate.edu/blog/erin-hodgson/can-insects-breathe-water

    [3] https://www.byrdie.com/how-to-get-rid-of-face-mites-5091865

    0
  • How to use Anime4K with mpv on Manjaro

    To use Anime4K with mpv on Manjaro, you need to do the following steps:

    1. Install Anime4K: Anime4K is available on the AUR. You can install it using any AUR helper like yay or pamac. Here's how you can do it with yay (Source 0):

       ```bash
       yay -S anime4k-git
       ```

    2. Install mpv: You can install mpv from the official Manjaro repositories using pacman (Source 3):

       ```bash
       sudo pacman -S mpv
       ```

    3. Download the Anime4K shaders: Depending on your GPU, you can download the shaders optimized for lower-end or higher-end GPUs from the Anime4K GitHub page. Extract the shaders and place them in your mpv shaders folder (typically ~/.config/mpv/shaders). (Source 3)

    4. Configure mpv to use Anime4K: You need to modify your mpv.conf file, usually located at ~/.config/mpv/mpv.conf. If it doesn't exist, you can create it. Below is a sample configuration:

       ```
       profile=gpu-hq
       scale=ewa_lanczossharp
       cscale=ewa_lanczossharp
       dscale=mitchell
       tscale=oversample
       correct-downscaling=yes
       sigmoid-upscaling=yes
       scaler-resizes-only=yes
       deband=yes
       ```

       Add the path to the Anime4K shaders with the glsl-shaders-append option. For example:

       ```
       glsl-shaders-append="~~/shaders/Anime4K_Upscale_CNN_M_x2_Denoise_Bilateral_Mode.glsl"
       ```

       Replace "~~/shaders/Anime4K_Upscale_CNN_M_x2_Denoise_Bilateral_Mode.glsl" with the path to your shaders. (Source 6)

    5. Use the correct mode for your content: Anime4K has 3 major modes: A, B, and C. Each mode is optimized for a different class of anime degradations. You can switch between these modes using CTRL+1 for Mode A (Optimized for 1080p Anime), CTRL+2 for Mode B (Optimized for 720p Anime), and CTRL+3 for Mode C (Optimized for 480p Anime). You can disable Anime4K using CTRL+0. Source 3

    Remember that Anime4K is quite GPU-intensive, so ensure your hardware is capable of handling it. If you experience issues, try adjusting your mpv settings or use a lower-end GPU version of Anime4K.

    0
  • Here's how to set up a Lemmy instance behind Tor

    Setting up a Lemmy instance behind Tor requires a few steps, including setting up a Lemmy instance, configuring it to work with Tor, and creating a Tor hidden service. Here's a step-by-step guide to help you achieve this:

    1. Set up a Lemmy instance: First, you need to set up a Lemmy instance on your server. You can follow the instructions provided in the Lemmy documentation to install Lemmy from scratch or use the official Docker Compose method.

    2. Install Tor: Follow the official Tor documentation to install Tor on your server.

    3. Configure Lemmy to work with Tor: According to the Lemmy documentation on running a Tor hidden service, Lemmy can be proxied through Tor, but federation tasks are still handled by HTTPS on the open internet. This means that your Lemmy instance will be accessible through the Tor network, but it will still federate with other instances using the clearnet.

    4. Create a Tor hidden service: Follow the Tor documentation on setting up an Onion Service to create a Tor hidden service for your Lemmy instance. This will allow users to access your Lemmy instance using a .onion address through the Tor network.

    5. Configure your web server: You may need to configure your web server (e.g., Nginx or Apache) to work with the Tor hidden service. The Tor documentation provides examples of how to set up an onionsite with Nginx and Apache on Debian.

    Once you have completed these steps, your Lemmy instance should be accessible through the Tor network using a .onion address. Keep in mind that federation between instances will still occur over the clearnet, as mentioned in the Lemmy documentation[1].

    Citations:

    [1] https://join-lemmy.org/docs/administration/tor_hidden_service.html

    0
  • Estimated Market Worth of Cultured Meat vs Traditional Meat

    What's the market worth of cultured vs normal meat

    0
  • 28/06/23

    0