This may be a long shot, but it's what I do, so it might be an option: set up a crypto gateway like CipherMail, which will automatically decrypt inbound email and sign/encrypt outbound. The result is that your Thunderbird never sees an encrypted email; decryption is handled transparently before it hits your inbox. Obviously, if you don't trust your email provider, this is not an option.
This isn't simple and hence not for everyone, and it comes with dependencies on your email provider, but it has worked flawlessly for me ever since I set it up. I run my own email server, so adding CipherMail wasn't a big deal.
Very helpful, thank you. I will absolutely watch these videos. And I am really glad that I seem to have found a forum where I can get some good input whenever I am stuck on something. It's been painful in the past :-) I know I am lacking the basics but still managed to get an app off the ground which appears to be useful for quite a few users globally. B
Ha, thank you. I didn't even realize that there is such granularity in dispatchers. Changed accordingly. I assume the IO dispatcher is somehow more efficient when it comes to IO tasks?
Would you care to elaborate on the lifecycle scope? I somehow can't add the dependency, and I'm not sure how this is going to improve things. Is this about making sure that the coroutine does or doesn't get canceled in case the user quits the activity before the import is complete?
```
// LifecycleCoroutineScope(Dispatchers.IO).launch {
// LifecycleScope(Dispatchers.IO).launch {
CoroutineScope(Dispatchers.IO).launch {
    importLogic()
}
```
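For context, what I believe the suggestion points at (my untested understanding): `lifecycleScope` is an extension property from the `androidx.lifecycle:lifecycle-runtime-ktx` artifact, a `CoroutineScope` that is cancelled automatically when the Activity is destroyed. A sketch:

```kotlin
// Requires: implementation("androidx.lifecycle:lifecycle-runtime-ktx:<version>")
import androidx.lifecycle.lifecycleScope

// inside the Activity:
lifecycleScope.launch(Dispatchers.IO) {
    importLogic()  // gets cancelled if the Activity is destroyed mid-import
}
```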
I do agree, I just couldn't figure out how to do it properly. Opening the ZIP and all subsequent actions are now outside of the composable. But I realized the UI didn't get updated until the "outside" function completed, so I ended up pushing the business logic to a coroutine:
Like this:
```
setContent {
    ImportUI()
}
CoroutineScope(Dispatchers.Default).launch {
    importLogic()
}
```
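From what I have read so far, a more idiomatic alternative inside Compose might be `LaunchedEffect`, which ties the coroutine to the composition instead of an ad-hoc scope. An untested sketch, reusing the names from above:

```kotlin
setContent {
    ImportUI()
    // Runs once when this composable enters the composition and is
    // cancelled automatically if it leaves the composition
    LaunchedEffect(Unit) {
        withContext(Dispatchers.Default) {
            importLogic()
        }
    }
}
```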
You would expose the port to your host, which makes the db accessible to anything running on the host, Docker or native. Something like
```
ports:
  - 5432:5432
```
But I would recommend running a dedicated db for each service. At least that's what I do.
- Simpler setup and therefore less error-prone
- More secure because the DBs don't need to be exposed
- Easier to manage because I can independently upgrade, backup, move
Isn't the point of containers that you keep things which depend on each other together, eliminating external dependencies? A single shared db would be an unnecessary dependency in my view. What if one service requires a new version of MySQL, and another one doesn't support the new version yet?
I also run all my databases via a bind mount
```
volumes:
  - ./data:/etc/postgres/data...
```
and each service lives in its own directory, e.g. /opt/docker/nextcloud.
That way everything which makes up a service is contained in one folder. Easy to back up/restore, easy to move, and not least, clean.
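As a rough illustration (service names, image tags and paths below are made up, not my actual config), such a per-service stack could look like:

```yaml
# /opt/docker/nextcloud/docker-compose.yml
services:
  app:
    image: nextcloud
    depends_on:
      - db
  db:
    image: postgres:15
    # no ports: section, so the db is reachable only by "app" on the
    # compose-internal network, not from the host
    volumes:
      - ./data:/var/lib/postgresql/data
```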
Lacking something fundamental with Compose
I am trying to convert a view-based screen to Compose, and while what I need should be very basic, somehow I can't get this to work. The use case at hand is a serial task where one step follows the other and the UI should reflect progress. But I seem to be missing something fundamental, because none of my Text() composables update. Below is a simplified example of what I've got:
```
override fun onCreate(savedInstanceState: Bundle?) {
    …
    setContent { Import() }
}

@Composable
fun Import() {
    var step1 by remember { mutableStateOf("") }
    var step2 by remember { mutableStateOf("") }

    Column() {
        Text(text = step1)
        Text(text = step2)
    }

    step1 = "Open ZIP file"
    val zipIn: ZipInputStream = openZIPFile()
    step1 = "✓ $step1"

    step2 = "Extract files"
    val count = extractFiles()
    step2 = "✓ $step2"
    …
}
```
If I set the initial text in the remember line, like this
var step1 by remember { mutableStateOf("Open ZIP file") }
the text will show, but it also never gets updated.
I also tried moving the logic into a separate function which gets executed right after setContent(), but then step1/step2 aren't available for me to update.
#######
Edit:
Well, as expected this turned out to be really easy. I have to break this one
var step1 by remember { mutableStateOf("Open ZIP file") }
into 2 statements:
val step1String = mutableStateOf("Open ZIP file")
With step1String as a class-wide variable so I can change it from other functions.
In the Import() composable function all I need is this:
var step1 by remember { step1String }
Have to say Compose is growing on me… :-)
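Spelled out, the pattern from the edit looks roughly like this (a sketch with the names from the post, not compiled):

```kotlin
// Class-wide state, writable from any function:
private val step1String = mutableStateOf("Open ZIP file")

@Composable
fun Import() {
    // Reading the state here subscribes Import() to changes,
    // so every write to step1String.value triggers recomposition
    val step1 by step1String
    Text(text = step1)
}

// e.g. from the import coroutine:
step1String.value = "✓ Open ZIP file"
```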
Enhancement proposal: How about a subtle Divider() between days?
Made a little mock-up to show what I mean. May not be for everyone, but in my case it would improve readability. Maybe as an option?
For me it's Borg backup, for Nextcloud and all the other servers
Turns out there are no more http websites out there, at least I couldn't find any. Good for our security, bad for my demo :-)
I herewith grant you the official title of a real hacker :-)
That's the tool I was looking for, thanks a lot!
Wireshark is my backup plan for a text-based demo. Unless I am missing something, pictures in Wireshark will just be binary data spread across multiple packets, which won't work for a demo. The tool I am looking for managed to identify which packets contain image data and showed the images in a grid.
Tool to sniff a network and show the pictures which get transmitted
I am preparing for a little presentation on Wifi security / the dangers of public networks to a non-technical audience and wanted to do a demonstration to make things a little more visual. My idea is to use a tool I had years ago which would look for network packets containing image data and reassemble the images as they get transmitted. With this tool active I would ask someone in the audience to navigate to a non-HTTPS website and then see the images on my PC/projector. It's a small crowd of elderly people, so the risk of catching something inappropriate should be small. :-)
But, I can't remember what the tool was called and my search attempts didn't reveal anything. Anyone got an idea?
Or maybe an idea for an even better demo?
I did indeed, and I have to say I am impressed by what I see so far! A really nice and complete tool you created. Thanks a lot for putting in the hours!
Docker-compose + Traefik
Has anyone got a working setup of this combination? I somehow can't get things to work
I can run the below on the docker host successfully:
```
curl -d "Backup successful" localhost:81/test
{"id":"4EpidFddbe8p","time":1688997266,"expires":1689040466,"event":"message","topic":"test","message":"Backup successful"}
```
…but when I try the public URL from a different machine I get a 404 page not found. Which to me means ntfy is running, but there's something wrong with my Traefik setup.
docker-compose.yml
```
…
ports:
- 81:80
labels:
- "traefik.enable=true"
- "traefik.http.routers.ntfy.rule=Host(`ntfy.mydomain.com`)"
- "traefik.http.routers.ntfy.tls=true"
- "traefik.http.routers.ntfy.entrypoints=http"
- "traefik.http.routers.ntfy.entrypoints=https"
- "traefik.http.routers.ntfy.tls.certresolver=http"
- "traefik.http.services.ntfy.loadBalancer.server.port=81"
- "traefik.docker.network=traefik-proxy"
- "traefik.http.routers.ntfy.service=ntfy"
```
Minimalistic server.yml (`cat config/server.yml`):
```
# ntfy server config file
base-url: "https://ntfy.mydomain.com"
#upstream-base-url: "https://ntfy.sh"
#listen-http: "127.0.0.1:80"
cache-file: "/var/cache/ntfy/cache.db"
#attachment-cache-dir: "/var/cache/ntfy/attachments"
behind-proxy: true
```
Can anyone spot a mistake here or suggest additional troubleshooting steps?
---- Edit: Never mind. Traefik has given me so much grief (my brain doesn't seem to be compatible :-)) that I decided to switch to nginx. Got everything running after 5 minutes…
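For anyone who wants to stick with Traefik: two things in the labels above look like likely culprits to me (educated guesses, not verified). `loadBalancer.server.port` should be the container port (80 here), not the published host port (81), and the second `entrypoints` label overrides the first rather than adding to it. Roughly:

```yaml
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.ntfy.rule=Host(`ntfy.mydomain.com`)"
  - "traefik.http.routers.ntfy.entrypoints=https"  # one line; a second one would override this
  - "traefik.http.routers.ntfy.tls=true"
  - "traefik.http.routers.ntfy.tls.certresolver=http"
  - "traefik.http.routers.ntfy.service=ntfy"
  - "traefik.http.services.ntfy.loadbalancer.server.port=80"  # container port, not host port
  - "traefik.docker.network=traefik-proxy"
```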
For a while it's just data in, which it handles really well. But it really started to shine for me when I needed to find some of the documents. OCR and the search work very well for me.
There are also some interesting thoughts in here: https://skerritt.blog/how-i-store-physical-documents/
What's the error?
Can you run `docker-compose logs`?
Here's mine: https://cloud.zell-mbc.com/s/Ac5KQTTxcWNYbNs
I tried to attach the file to this post but the formatting got completely messed up, hence the link.
Before you run docker-compose you need to change the paperless-app volumes to fit your requirements and set up the variables in .env
Device is an HP Pro 9010 printer/scanner with a local SMB folder set up as the scan target. Paperless monitors the share and picks up everything someone (I) puts in there. Scanner, PC, phone: anything which can connect to the SMB share. Dead easy and works reliably.
Let me throw in Paperless NGX, https://github.com/paperless-ngx/paperless-ngx
I run mine on Docker as well, but I noticed that the documentation has some issues, at least if you are fronting it with nginx like I am.
:-)
But seriously, I was wondering about the requirement to shut down the VMs and couldn't come up with a solid reason. I mean, even if QEMU/KVM/the kernel get replaced during a version upgrade or a more common update, all of these kick in only after the reboot? And how is me shutting down VMs manually different from the OS shutting them down during a reboot?
I know I am speculating and may not have the full picture (probably a question for the Proxmox team); there may be some corner case where this is indeed important.
By the way, Mexican or US black strat? :-)
I have been looking into Mastodon a while back but found it way too complex for my single user use case. I ended up with Akkoma running on Docker which seems to be a much better fit for this requirement. I also set up Lemmy on Docker a week ago or so which seems to run fine as well. I noticed the comment here that the Lemmy documentation for Docker is incomplete, which I noticed as well. But I figured it out, so if you hit a road block I may be able to help.
Like you, I have OPNsense in a VM on one of my PVEs. But I only made sure the nightly VM backup ran and didn't even bother shutting down the VMs during the upgrade. The VMs got restarted during the final reboot, as they would with every other reboot, and I was back in business.
Well, depends on what your client OS is, I guess. There's a CLI tool called proxmox-backup-client which allows you to back up at OS level; I use it to back up the host OS of my PVE server.
I am not sure if this tool is available for Windows/Mac though.
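From memory (so treat the hostname and datastore name below as placeholders, not my actual setup), a host-level backup with the client looks roughly like this:

```
# Back up the root filesystem as a pxar archive to a PBS datastore;
# the repository spec has the form user@realm@server:datastore
proxmox-backup-client backup root.pxar:/ --repository root@pam@pbs.example.com:store1
```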
I have 2 PVE servers, and on one I have installed PBS in parallel, which works fine. Just a different port to get to the UI: PVE https://pve2:8006, PBS https://pve2:8007.
You didn't say what you are using for your scheduled backups. If it's something like Borg backup you got a similar level of functionality, CLI instead of a nice UI though.
I have been using Borg for years and recently also installed PBS. What I like about it is that the UI is similar to PVE and that it nicely integrates the backup process into the UI, which makes handling easier and in the end less error-prone when it comes to restores, I guess. From where I stand right now, I will likely keep PBS for things which run on PVE, and Borg for the rest of the world.
Proxmox Backup Server 3.0 available
It's based on Debian 12 "Bookworm", but uses the newer Linux kernel 6.2, and includes ZFS 2.1.12.
- Debian 12, with a newer Linux kernel 6.2
- ZFS 2.1.12
- Additional text-based user interface (TUI) for the installer ISO
- Many improvements for tape handling
- Sync jobs: “transfer-last” parameter for more flexibility
Release notes https://pbs.proxmox.com/wiki/index.php/Roadmap
Press release https://www.proxmox.com/en/news/press-releases/