i like tailscale but i notice i get more weird, blippy network latency issues when using it. i used to always keep my phone connected to my tailnet so i could use my dns, etc., but occasionally something won't load right and i have to refresh a couple of times.
It tended to happen a lot more when switching between wifi and cellular while leaving and entering buildings, etc.
Now I just don’t use it
I've found that using Tailscale on my Android phone became worlds more reliable (as far as the issues you've described) once I stopped using a custom DNS resolver on my Tailnet.
Want to use my pi-hole as DNS though.
Can someone help me understand what this is vs exposing my services via MagicDNS using the Tailscale Kubernetes operator? Functionally there looks to be a fair amount of overlap, but this solution is generic outside of Kubernetes and more baked into Tailscale itself? The operator solution obviously uses kube primitives to achieve a fair amount of the features discussed here.
Was the personal plan not always free?
I’m also curious about this since I’ve been exposing services via their experimental caddy plugin.
Very cool, I love Tailscale. I use it to connect together a VPS, desktop computer, phone, and a few laptops. My main use case is self-hosted Immich and Forgejo so this is great.
Fascinating to watch Tailscale evolve from what was (at least in my mind) a consumer / home-lab / small-business client networking product into an enterprise server-networking product.
They're morphing into a B2B centicorn, and the consumer-led tooling route was a genius path.
They provided much-needed solutions to annoying problems and did it in a way that made developers love them.
Really smart and well executed.
I understand the usefulness of the feature, but find their examples weird. Are people really exposing their company's databases and web hosts on their tailnet?
Yes, I host web services for my own consumption, like the Miniflux RSS aggregator, that don't need to be on the public internet.
Similarly, I'm going to host my small business's staging database on a home server and expose that on my tailnet.
If I'm getting this right, it's only highly available from a network-layer perspective. However, if one of your nodes is still responsive but the service you exposed on it isn't, there's no way for Tailscale to know, and it'll route the packet just the same? It's not doing health checks the way a reverse proxy would, I imagine.
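For illustration, here's a minimal Go sketch of the kind of application-level health checking a reverse proxy does (and which network-layer failover alone wouldn't); the backend URL and the `/healthz` path are hypothetical:

```go
// Sketch: probe the backend's HTTP health endpoint and only route to it
// while it answers. A node that is reachable at the network layer but
// whose service is down would fail this check.
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
	"time"
)

func main() {
	backend, _ := url.Parse("http://backend-node:8080") // hypothetical backend

	var healthy atomic.Bool
	go func() {
		client := &http.Client{Timeout: 2 * time.Second}
		for {
			resp, err := client.Get(backend.String() + "/healthz")
			ok := err == nil && resp.StatusCode == http.StatusOK
			if resp != nil {
				resp.Body.Close()
			}
			healthy.Store(ok) // node up but service down => unhealthy
			time.Sleep(5 * time.Second)
		}
	}()

	proxy := httputil.NewSingleHostReverseProxy(backend)
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		if !healthy.Load() {
			http.Error(w, "backend unhealthy", http.StatusBadGateway)
			return
		}
		proxy.ServeHTTP(w, r)
	})
	http.ListenAndServe(":80", nil)
}
```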
I'm happy to see this feature added. It's a feature that I didn't quite realize I was missing, but now that I see it described, I can understand exactly how I'll put it to use. Great work as always by the Tailscale team.
Does anyone use Tailscale in production as the network layer between services? Would be interested about hearing experiences.
We use it to allow us to connect in from the outside (and for user-to-user access, etc.), but not for service-to-service connections.
In addition, do people do so in mesh format? It seems expensive to do for all of your machines; more often the topology I see is a relay/subnet-advertisement-based architecture that handles L3, with some other system handling L6/L7.
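One way the "mesh format" shows up in practice is embedding each service as its own tailnet node with Tailscale's tsnet library, rather than putting everything behind a subnet router. A minimal sketch, assuming a TS_AUTHKEY in the environment; the hostname is illustrative:

```go
// Sketch: this process joins the tailnet as its own node ("svc-foo")
// and serves HTTP only on the tailnet, not on any public interface.
package main

import (
	"fmt"
	"log"
	"net/http"

	"tailscale.com/tsnet"
)

func main() {
	s := &tsnet.Server{Hostname: "svc-foo"} // reads TS_AUTHKEY from env
	defer s.Close()

	ln, err := s.Listen("tcp", ":80") // tailnet-only listener
	if err != nil {
		log.Fatal(err)
	}
	log.Fatal(http.Serve(ln, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello from svc-foo over the tailnet")
	})))
}
```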
Works great to connect fly.io apps that are only exposed to Flycast private IPv6 addresses. And I think Tailscale Services will replace these.
Performance between fly.io web servers in iad region to RDS databases in us-east-1 via subnet routers has been spotty to say the least.
This would be great if it supported wildcards for ingress controllers. A service foo would give you foo.tailYYYY.ts.net as well as *.foo.tailYYYY.ts.net.
This sounds great; I think it's exactly what I was looking for recently for hosting arbitrary services on my tailnet. I figured out a workaround where I created a wildcard certificate and a DNS CNAME record pointing to my Raspberry Pi on my tailnet, but this could potentially be simpler.
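The workaround above might look roughly like this on the Pi: one server terminates TLS with the wildcard cert and fans out by Host header. This is a Go sketch; the domain, cert, and key paths are placeholders, and the wildcard cert is assumed to come from a DNS-01 challenge out of band:

```go
// Sketch: serve *.home.example.com (placeholder domain) from one box,
// picking the target service out of the Host header.
package main

import (
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// e.g. immich.home.example.com -> "immich"
		service := strings.SplitN(r.Host, ".", 2)[0]
		fmt.Fprintf(w, "would proxy to service %q\n", service)
	})
	// A CNAME for *.home.example.com points at the Pi's tailnet name,
	// so only devices on the tailnet can actually reach this listener.
	log.Fatal(http.ListenAndServeTLS(":443",
		"/etc/ssl/wildcard.crt", "/etc/ssl/wildcard.key", mux))
}
```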
I wonder if that architecture screenshot's "MagicDNS" value is a nod to Pangolin, since they are currently working on a new Clients feature that should eventually replicate some of the core Tailscale functionality.
I'm afraid it's much more sophisticated. A Pangolin has both a Tail and Scales.
I recently found Tailscale when searching for a way to control my home lab while traveling, and I've been amazed by how simply it lets us create a private network.
I did not intuitively understand what Tailscale does, so I visited the following related page:
https://tailscale.com/blog/how-tailscale-works
Ah! OK, now I get it! :-)
But what I found particularly interesting on that page was the following:
>" Some especially cruel networks block UDP entirely
, or are otherwise so strict that they simply cannot be traversed using STUN and ICE. For those situations, Tailscale provides a network of so-called DERP (Designated Encrypted Relay for Packets) servers. These fill the same role as TURN servers in the ICE standard, except they use HTTPS streams and WireGuard keys instead of the obsolete TURN recommendations."
DERP seems like one interesting solution (there may be others!) to UDP blockages...
Is this like a more robust Funnel?
Fantastic. So many possibilities.
I just wish Tailscale would allow you to use long-lived tokens for ephemeral nodes...
Short-lived tokens are not always an option.
You can use OAuth tokens with auth_key write permission as long-lived tokens to provision ephemeral nodes.
I have a GitHub action that uses an OAuth token to provision a new key and store it in our secrets manager as part of the workflow that provisions systems - the new systems then pull the ephemeral key to onboard themselves as they come up
It can get especially interesting when you do things like have your GitHub runners onboard themselves to Tailscale - at that point you can pretty much fully-provision isolated systems directly from GitHub Actions if you want
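A minimal sketch of that flow against Tailscale's public v2 API, in Go: exchange the OAuth client for a token, then mint a single-use, ephemeral, preauthorized auth key. The tag name and the client-credential env vars are placeholders:

```go
// Sketch: OAuth client credentials -> API token -> ephemeral auth key.
package main

import (
	"context"
	"fmt"
	"io"
	"log"
	"os"
	"strings"

	"golang.org/x/oauth2/clientcredentials"
)

func main() {
	conf := &clientcredentials.Config{
		ClientID:     os.Getenv("TS_OAUTH_CLIENT_ID"),
		ClientSecret: os.Getenv("TS_OAUTH_CLIENT_SECRET"),
		TokenURL:     "https://api.tailscale.com/api/v2/oauth/token",
	}
	client := conf.Client(context.Background()) // auto-refreshing HTTP client

	// "-" addresses the default tailnet of the token; tag:ci is a placeholder.
	body := `{
	  "capabilities": {"devices": {"create": {
	    "reusable": false, "ephemeral": true,
	    "preauthorized": true, "tags": ["tag:ci"]}}},
	  "expirySeconds": 3600
	}`
	resp, err := client.Post(
		"https://api.tailscale.com/api/v2/tailnet/-/keys",
		"application/json", strings.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out)) // response contains the one-shot ephemeral key
}
```

The new system then uses that key once (e.g. `tailscale up --auth-key=...`) to join the tailnet as an ephemeral node.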
I'm curious: in which situations are short-lived tokens not an option?