

Hello fellow midwesterner!
Little bit of everything!
Avid Swiftie (come join us at !taylorswift@poptalk.scrubbles.tech)
Gaming (Mass Effect, Witcher, and too much Satisfactory)
Sci-fi
I live for 90s TV sitcoms


Hello fellow midwesterner!


Exactly this. People here mass downvote, but I personally find AI to be extremely useful… to do things I already know how to do but don't have the time for. I don't trust it to do things I can't spot the errors in.


I’ve literally gotten jobs because of my homelabbing. Being able to talk about how I run services and why I enjoy them has helped more than most of my professional work


Sure! I use Kaniko (although I see now that it's not maintained anymore). I'll probably pull the image in locally to protect it…
Kaniko does the Docker-in-Docker part, and I found an action that I use, but it looks like that was taken down… Luckily I archived it! Make an action in Forgejo (I have an infrastructure group that I add public repos to for actions). So this one is called action-koniko-build, and all it has is this action.yml file in it:
name: Kaniko
description: Build a container image using Kaniko
inputs:
  Dockerfile:
    description: The Dockerfile to pass to Kaniko
    required: true
  image:
    description: Name and tag under which to upload the image
    required: true
  registry:
    description: Domain of the registry. Should be the same as the first path component of the tag.
    required: true
  username:
    description: Username for the container registry
    required: true
  password:
    description: Password for the container registry
    required: true
  context:
    description: Workspace for the build
    required: true
runs:
  using: docker
  image: docker://gcr.io/kaniko-project/executor:debug
  entrypoint: /bin/sh
  args:
    - -c
    - |
      mkdir -p /kaniko/.docker
      echo '{"auths":{"${{ inputs.registry }}":{"auth":"'$(printf "%s:%s" "${{ inputs.username }}" "${{ inputs.password }}" | base64 | tr -d '\n')'"}}}' > /kaniko/.docker/config.json
      echo Config file follows!
      cat /kaniko/.docker/config.json
      /kaniko/executor --insecure --dockerfile ${{ inputs.Dockerfile }} --destination ${{ inputs.image }} --context dir://${{ inputs.context }}
Then, you can use it directly like:
name: Build and Deploy Docker Image
on:
  push:
    branches:
      - main
  workflow_dispatch:
jobs:
  build:
    runs-on: docker
    steps:
      # Checkout the repository
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Get current date # This is just how I label my containers, do whatever you prefer
        id: date
        run: echo "::set-output name=date::$(date '+%Y%m%d-%H%M')"
      - uses: path.to.your.forgejo.instance:port/infrastructure/action-koniko-build@main # This is what I said above, it references your infrastructure action, on the main branch
        with:
          Dockerfile: cluster/charts/auth/operator/Dockerfile
          image: path.to.your.forgejo.instance:port/group/repo:${{ steps.date.outputs.date }}
          registry: path.to.your.forgejo.instance:port/v1
          username: ${{ env.GITHUB_ACTOR }}
          password: ${{ secrets.RUNNER_TOKEN }} # I haven't found a good secret option that works well, I should see if they have fixed the built-in token
          context: ${{ env.GITHUB_WORKSPACE }}
I run my runners in Kubernetes in the same cluster as my Forgejo instance, so this all hooks up pretty easily. Let me know if you want to see that, if it's relevant. The big thing is that you'll need the runners to be privileged, and there's some complicated stuff where you need to run both the runner and the "dind" container together; there's a rough sketch of that pod layout below.
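For the curious, the rough shape of that pod follows. This is a minimal sketch only: the runner image tag, the PVC name, and running dind over plain TCP are assumptions to keep it short, not my exact manifests, so adapt them to your cluster and the runner docs.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: forgejo-runner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: forgejo-runner
  template:
    metadata:
      labels:
        app: forgejo-runner
    spec:
      containers:
        - name: runner
          image: code.forgejo.org/forgejo/runner:3.5.0   # assumed tag, use whatever is current
          command: ["forgejo-runner", "daemon"]          # assumes the runner was already registered
          env:
            - name: DOCKER_HOST                          # point job containers at the dind sidecar
              value: tcp://localhost:2375
          volumeMounts:
            - name: runner-data
              mountPath: /data                           # holds the .runner registration file
        - name: dind
          image: docker:dind
          securityContext:
            privileged: true                             # dind will not start without this
          env:
            - name: DOCKER_TLS_CERTDIR                   # empty disables TLS; fine for a sketch, use TLS for real
              value: ""
      volumes:
        - name: runner-data
          persistentVolumeClaim:
            claimName: forgejo-runner-data               # assumed PVC name

The important parts are the privileged dind sidecar and pointing DOCKER_HOST at it; everything else is plumbing you'd adapt to your own cluster.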


Forgejo runners are great! I found some simple actions to do docker in docker and now build all my images with them!


Good note and good callout; we should always call out these things.
But yes, if you're self-hosting and you both have a public-facing instance and allow open registration, you are a much, much braver person than I.


Okay, that makes so much sense, because I knew I'd had calling in Element before, but they wanted me to set up all this extra stuff. Is the plugin still a thing?


Wait, there's a Jitsi plugin?


Element on Matrix is the only one I’m aware of - but it’s not the easiest to set up. I would try creating an account on matrix.org’s server just temporarily to try it out and see if it fits what you’re looking for. I like the decentralized nature of it, but the support is very piecemeal, and onboarding people essentially needs a class.


Holy setup, Batman. I was thinking it was going to be another container I spin up, but it needs kernel modules enabled, needs IOMMU, needs a ton of setup, and then it looks like you still have to compile it? For now, at least, that's above my needs.


but the vast majority of crawlers don't care to do that. That's a very specific implementation for this one problem. I actually did work at a big scraping farm, and if they encounter something like this, they just give up. It's not worth it to them. That's where the "worthiness" check is: you didn't bother to do anything to gain access.


That's counting on one machine using the same cookie session continuously, or on them coding up a way to share the tokens across machines. That's not how the bot farms work.


I ran a single server with only me and two or so others, and then saw that I had thousands of requests per minute at times! Absolutely nuts! My cloud bill was way higher. After adding Anubis it dropped down to just our requests, and the bills dropped too. Very, very strong proponent now.


This dance to get access is just a minor annoyance for me, but I question how it proves I’m not a bot. These steps can be trivially and cheaply automated.
I don't think the author understands the point of Anubis. The point isn't to block bots from your site completely; bots can still get in. The point is to put a problem at the door to the site. This problem, as the author states, is relatively trivial for the average device to solve; it's meant to be solved by a phone or any consumer device.
The actual protection mechanism is scale: solving that problem at scale is costly. Bot farms aren't one single host or machine; they're thousands, tens of thousands of VMs running in clusters, constantly trying to scrape sites. So to them, calculating something that trivial is simple once, but very, very costly at scale. Say calculating the hash once takes about 5 seconds. Easy for a phone. Now say that's 1,000 scrapes of your site: that's 5,000 seconds of compute, roughly an hour and a half. Now we're talking about real dollars and cents lost. Scraping does have a cost, and having worked at a company that professionally scrapes content, I know they know this. Most companies will back off after trying to load a page that takes too long or is too intensive, and that is why we see the dropoff in bot attacks: it's just not worth it for them to scrape the site anymore.
So Anubis is "judging your value" by asking, "Are you willing to put your money where your mouth is to access this site?" For a consumer it's a fraction of a fraction of a penny in electricity for that one page load, barely noticeable. For large bot farms it's real dollars wasted on my little Lemmy instance/blog, and thankfully they've stopped caring.


Check out Anubis. If you have a reverse proxy it is very easy to add, and for me the bots stopped spamming after I added it to mine.
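For anyone wondering what "easy to add" looks like: the traffic flow is reverse proxy → Anubis → your app. Below is a minimal docker-compose sketch; the image path and the BIND/TARGET/DIFFICULTY settings are from memory of the Anubis docs, so double-check them against the current documentation before using.

services:
  anubis:
    image: ghcr.io/techarohq/anubis:latest   # assumed image path, check the project docs
    environment:
      BIND: ":8923"                          # where Anubis listens for proxied traffic
      TARGET: "http://myapp:8080"            # the service Anubis protects (placeholder name/port)
      DIFFICULTY: "4"                        # proof-of-work difficulty shown to visitors
    ports:
      - "8923:8923"
  myapp:
    image: myapp:latest                      # placeholder for whatever you're protecting

Then point your reverse proxy's upstream at anubis:8923 instead of at the app, and Anubis serves the challenge before passing requests through.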


This looks great! Thank you for the recommendation!


Interesting, had no idea!


My mind went to this one



Best we continue to change nothing, because what's the point! Here's my credit card, fill 'er up, please and thanks!
For everyone here, this is true, and I'd recommend buying a couple of drives for quick replacements. If you don't, you're gambling that 1) a replacement drive will be available and 2) it will be affordable.
Keep a few spares lying around while we get through the ebbs and flows of the market. As K said, a person is smart; people are dumb and panicky. If SSD and storage prices rise, people panic, and HDD prices will rise too.
Prep now and thank yourself later.