

Unless you’re running VLANs, in which case inter-VLAN routing is normally handled by the router. I also expose my home lab services over BGP, so all my traffic hits the router and then comes back to my lab services.
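For the curious, a minimal sketch of the BGP part, assuming FRR on the lab host (the ASNs, peer address, and service VIP below are all made up for illustration):

```
! /etc/frr/frr.conf on the lab host
router bgp 64512
 ! peer with the home router
 neighbor 192.168.1.1 remote-as 64500
 address-family ipv4 unicast
  ! announce a service VIP; the route has to exist locally
  ! (e.g. the VIP sits on a dummy interface)
  network 10.45.0.10/32
 exit-address-family
```

The router then learns a route for that VIP pointing at the lab host, so clients anywhere on the network reach the service via the router.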


https://forum.syncthing.net/t/discontinuing-syncthing-android/23002
According to this post, it was partly that and a lack of maintainers. Given there are maintainers for a fork, I’m curious why they didn’t bring them into the main project.
Reason is a combination of Google making Play publishing something between hard and impossible and no active maintenance. The app saw no significant development for a long time and without Play releases I do no longer see enough benefit and/or have enough motivation to keep up the ongoing maintenance an app requires even without doing much, if any, changes.


We’re sort of in this situation because the official project decided not to continue providing an official Android app, yet people want to use it on Android, forcing unofficial versions to be created and maintained.
I get that they don’t want to deal with Google Play anymore, but somebody has to deal with it, and them not owning the app puts users at risk.


How would that work? The use case is previews for pull requests. Somebody submits a change to the website. This creates a preview domain where reviewers and authors can see the proposed changes in a clean environment.
Cloudflare Pages gives this behavior out of the box.


It is for pull requests. A user makes a change to the documentation and wants to be able to see the changes on a web page.
If you don’t have them on the open web, developers and pull request authors can’t see the previews.
The issue they had was being marked as phishing, not the SSL certificate warning page.


Small correction. He was impeached by the House. The Senate then decides whether to convict, not whether to impeach.


A newer release, v0.6.30, has already been published to fix an issue with OneDrive integration.
Looks like they finally made their slim image tag smaller than the main image:
ghcr.io/open-webui/open-webui:v0.6.30-slim 7c61b17433e8 46 hours ago 4.3GB
ghcr.io/open-webui/open-webui:v0.6.30 c1ac444c0471 46 hours ago 4.82GB
Though only saving 0.5GB of space is not very slim. I use OpenWebUI in my home lab, but this issue just made me question the quality of the project a tiny bit.


I’ve been running my own mail for 10+ years. I recommend rspamd for spam filtering. It took the place of SpamAssassin, greylisting, SPF checking, etc., all in one single system.
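If you’re on Postfix (assumed here), the glue is tiny: rspamd’s proxy worker speaks the milter protocol, so you just point Postfix at it.

```
# /etc/postfix/main.cf - hand incoming and outgoing mail to rspamd's milter
# (11332 is rspamd_proxy's default milter port)
smtpd_milters = inet:localhost:11332
non_smtpd_milters = inet:localhost:11332
milter_default_action = accept
```

Greylisting, SPF/DKIM/DMARC checks, and Bayes filtering all happen inside rspamd after that.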


How do you expect the packets to actually route? If you run Tailscale and your VPN on your phone, they might fight each other for control of the routing table.
If you’re trying to use a Tailscale exit node to then route through Tailscale to one node running gluetun to Mullvad, that’s going to be complex because, again, they both want to mess with the routing table.
Tailscale natively supports Mullvad: https://tailscale.com/mullvad
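If you go that way, it’s a couple of commands on the device (assuming the Mullvad add-on is enabled on your tailnet; the node name below is just an example):

```
# list the Mullvad exit nodes available on the tailnet
tailscale exit-node list

# send this device's traffic out through one of them
tailscale set --exit-node=de-fra-wg-001.mullvad.ts.net
```

No second VPN app fighting over the routing table.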


Okay, it was a little hard to read since your post was missing formatting. TS_SUBNETS is what controls which CIDRs are announced through Tailscale. Since you’re not using Docker networking for Jellyfin, it would be whatever subnet the host is on. Maybe it’s 192.168.x.y.
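Roughly like this in compose, assuming the official tailscale/tailscale image (which spells the variable TS_ROUTES) and a 192.168.1.0/24 LAN; adjust to whatever your subnet actually is:

```yaml
services:
  tailscale:
    image: tailscale/tailscale:latest
    network_mode: host
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    environment:
      - TS_AUTHKEY=tskey-auth-REPLACE_ME
      # advertise the LAN the Jellyfin host sits on
      - TS_ROUTES=192.168.1.0/24
      - TS_STATE_DIR=/var/lib/tailscale
    volumes:
      - ./tailscale-state:/var/lib/tailscale
```

You also have to approve the advertised route in the Tailscale admin console (or set up route auto-approval) before other devices can use it.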


Gluetun doesn’t make any sense here. You’re forcing all the traffic from Jellyfin to go through Mullvad, but you need to be able to connect to Jellyfin, because Jellyfin is a service you connect to.
Since your Tailscale is host network mounted, you’ll be able to expose your Docker network subnets over Tailscale and then access Jellyfin. This is done via the TS_SUBNETS env variable. Docker networks typically use subnets from the 172.16.0.0/12 range.
You probably intend to gluetun your downloading software, not Jellyfin.
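A rough compose sketch of that split (the image names and the qbittorrent example are assumptions, not from your post): the downloader rides inside gluetun’s network namespace, while Jellyfin stays on a normal Docker network that Tailscale can route to.

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=mullvad
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=REPLACE_ME
      - WIREGUARD_ADDRESSES=10.64.222.21/32   # from your Mullvad config

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    # all of the downloader's traffic leaves via gluetun/Mullvad
    network_mode: "service:gluetun"

  jellyfin:
    image: jellyfin/jellyfin
    # stays on the default compose network, reachable over Tailscale
    ports:
      - "8096:8096"
```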
Your options are to run smaller models or wait. llama3.2:3b fits in my 1080 Ti’s VRAM and is sufficiently fast. Bigger models will get split between VRAM and RAM and run slower, but it’ll work.
Not all models are Gen AI-style LLMs. I run speech-to-text models on my GPU too, for my smart home.
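If you’re using Ollama (an assumption on my part), it’s easy to check whether a model actually fits:

```
# pull and run a small model
ollama run llama3.2:3b

# in another shell: shows whether the loaded model is 100% GPU
# or split between CPU and GPU
ollama ps
```

Anything that shows up as partially on CPU will be noticeably slower, but it still answers.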


I don’t think there is a technical issue or any kind of complexity at issue here, the problem seems trivial even though I haven’t worked the details. It is moot since it’s broken on purpose to preserve “They’s” business model.
I’m explaining what the technical problems are with your idea. It seems like you don’t fully understand the technical details of these networking protocols, and that’s okay, but I’ve summarized a few non-trivial technical problems that aren’t just people keeping multicast from being used. I assure you that if multicast worked, big tech would want to use it. For example, Netflix would use it to distribute content to their CDN boxes and save tons of bandwidth.


I don’t know who “they” is in this case, but let’s think about this for a minute.
Technically, what do you need for this to work?
How many multicast addresses do you need? How are multicast addresses assigned? Can anybody write to any multicast address? How do I decide that 239.53.244.53 is for my file and not your movie? How do we know who is listening? This is effectively BGP, but trickier, because depending on the answers to the previous questions you may not benefit from any network block sizes to reduce the routing info being shared. How do you decide when to start transmitting a file? Is anybody listening? Does anybody care?
You seem latched on to the assumption that it would technically work and haven’t asked whether it’s actually a good solution. P2P is going to work better than multicast here.
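To make the group-membership problem concrete, here’s roughly what a receiver has to do today even on a local network (Python sketch; the group and port are arbitrary): it explicitly joins the group via IGMP, and the sender has no idea who, if anyone, has joined.

```python
import socket
import struct

GROUP = "239.53.244.53"  # arbitrary administratively scoped group, as in the example above
PORT = 5000

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# ask the local network (via IGMP) to start delivering this group to us;
# nothing beyond the local segment knows or cares that we did this
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, addr = sock.recvfrom(65535)
    print(f"{len(data)} bytes from {addr}")
```

Scaling that membership signalling from one switch to the whole internet is the part nobody has solved.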


Multicast addresses are handled specially in routers and switches all over the world.
Changing that would require massive firmware updates everywhere to get this to work, and we can’t even get people to adopt IPv6. Never mind the complexity of figuring out how to manage IGMP group membership at the Internet scale.
Given the complexity of either change, it’s better to adopt IPv6 and use PeerTube. Multicast at the Internet scale won’t work, and IPv6 is less work.


Even assuming multicast worked across the internet, it’s not going to work in practice. Multicast works by sending a packet and fanning it out to all receivers.
It works with broadcast TV like IPTV because everybody is watching the same small set of channels at the same time, but on YouTube I can watch any video at any time. How does a mythical transmitter know which video packets to send when? Are they on loop? Are clients receiving packets for videos they don’t care about?
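The sender side makes the mismatch obvious (Python sketch, same arbitrary group as above): it just blasts one stream of datagrams at the group address, with no per-viewer state for seeking, pausing, or picking a different video.

```python
import socket
import time

GROUP = "239.53.244.53"
PORT = 5000

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# TTL limits how far the packets travel; internet routers drop them regardless
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)

while True:
    # everyone who joined the group gets the same packets,
    # whether they wanted this particular video or not
    sock.sendto(b"frame of whatever is playing right now", (GROUP, PORT))
    time.sleep(0.04)
```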
You might be interested in PeerTube which uses unicast peer to peer to distribute videos in a way that works.


I use a variant of this: https://github.com/linuxserver/docker-wireguard
You don’t need two different containers for this. They’re going to either fight each other for control of the routing tables, or you’ll end up running WireGuard inside WireGuard.
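A rough compose sketch of the single-container approach with that image (the qbittorrent service is just an example of something you’d put behind it):

```yaml
services:
  wireguard:
    image: lscr.io/linuxserver/wireguard
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      # drop your VPN provider's wg0.conf in here to run the container as a client
      - ./wireguard:/config
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    # shares the wireguard container's network stack, so it never
    # touches the host routing table itself
    network_mode: "service:wireguard"
```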


So I had a chance to try this out. It wasn’t on the Google Play Store, only F-Droid. There isn’t really SSO support; you either log in with a username/password or a token. Instead, I log in with my browser, get the token, and paste it in. That works fine, but in an ideal world it would just pop up a browser WebView, go through the flow, and grab the token. Maybe it was intentional, but PaperlessShare registered as an open handler for PDFs as well as in the share menu, whereas this is share menu only. That seems to mean I need to grant file access, whereas the open handler didn’t need that, I think.
Overall, it does the job and gets my docs uploaded.


If the app is just a WebView wrapper around the application, then the challenge page would load and be evaluated.
If it’s a native Android/iOS app, then it probably wouldn’t work, because the app would try to make HTTP API calls and get back something unexpected.
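For instance (Python sketch with the requests library; the endpoint is hypothetical): a native client expecting JSON gets the challenge HTML instead and has to detect and handle that case explicitly.

```python
import requests

# hypothetical API endpoint sitting behind the Cloudflare challenge
resp = requests.get(
    "https://docs.example.com/api/documents/",
    headers={"Accept": "application/json"},
)

# a challenge typically answers with 403/503 and an HTML body
if "text/html" in resp.headers.get("Content-Type", ""):
    print("Got the challenge page instead of the API response")
else:
    documents = resp.json()
```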