<div dir="ltr"><div dir="ltr"><div><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Dec 31, 2019 at 8:45 AM Kurt H Maier <<a href="mailto:khm@sciops.net">khm@sciops.net</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
If you need this kind of functionality in Kubernetes you're much better<br>
off using a different CNI plugin to manage your networking. There's no<br>
inherent NAT requirement imposed by Kubernetes itself.</blockquote></div></div><div><br></div><div>This is not about CNI networking; inside the cluster, TFTP works just fine.</div><div>This is about service networking (kube-proxy) and accessing services from outside the cluster.</div><div>Example: when you need to boot a node, it cannot yet be a member of the cluster at that early boot stage.<br></div><div><br></div><div>Of course you can use hostNetwork=true, but that is less secure and not redundant.</div><div><br></div><div><div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
That approach is dangerously broken. The transfer IDs and the ports are<br>
supposed to match; ramming everything over a single port is going to <br>
break down when you have a lot of transfers happening simultaneously.<br></blockquote></div></div></div><div><br></div>The packets are always sent to the client-specific port, and there are no put requests.</div><div dir="ltr">What is actually broken? Example tcpdump:<br></div><div dir="ltr"><br></div><div>This is standard mode:</div><div dir="ltr">IP 172.17.0.2.42447 > 172.17.0.1.69: 22 RRQ "/some_file" netascii<br>IP 172.17.0.1.56457 > 172.17.0.2.42447: UDP, length 15<br>IP 172.17.0.2.42447 > 172.17.0.1.56457: UDP, length 4<br><br>This is single port mode:</div><div dir="ltr">IP 172.17.0.2.56296 > 172.17.0.1.69: 22 RRQ "/some_file" netascii<br>IP 172.17.0.1.69 > 172.17.0.2.56296: 15 DATA block 1<br>IP 172.17.0.2.56296 > 172.17.0.1.69: 4 ACK block 1<br><div><br></div><div><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr">- kvaps<br></div></div></div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Dec 31, 2019 at 8:45 AM Kurt H Maier <<a href="mailto:khm@sciops.net">khm@sciops.net</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On Mon, Dec 30, 2019 at 12:51:30PM +0100, kvaps wrote:<br>
><br>
> Note that Kubernetes uses NAT for external services, so it's not possible<br>
> to run TFTP-server for external clients there. There is one proposed<br>
> solution for that, it suggests moving away from the RFC and implement<br>
> --single-port option for always reply from the same port which was <br>
> requested by the client.<br>
<br>
That approach is dangerously broken. The transfer IDs and the ports are<br>
supposed to match; ramming everything over a single port is going to <br>
break down when you have a lot of transfers happening simultaneously.<br>
<br>
If you need this kind of functionality in Kubernetes you're much better<br>
off using a different CNI plugin to manage your networking. There's no<br>
inherent NAT requirement imposed by Kubernetes itself.<br>
<br>
khm<br>
</blockquote></div></div>
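<div><br></div><div>To illustrate the point about simultaneous transfers: in RFC 1350 the transfer ID is the port, but even with the server pinned to port 69, each transfer is still uniquely identified by the client's (IP, source port) pair, as the tcpdump above shows. Here is a minimal, hypothetical sketch of that demultiplexing (the names and structure are mine, not dnsmasq's actual implementation):</div>

```python
# Hypothetical sketch: a single-port TFTP server can still keep concurrent
# transfers apart, because each client's (IP, ephemeral port) pair is unique
# even though the server's own port is fixed at 69.

transfers = {}  # (client_ip, client_port) -> next expected block number


def handle_packet(client_ip, client_port, opcode, block):
    """Track per-transfer state keyed by the client's address tuple."""
    key = (client_ip, client_port)
    if opcode == "RRQ":
        transfers[key] = 1          # new read request: start at block 1
    elif opcode == "ACK" and transfers.get(key) == block:
        transfers[key] += 1         # advance only this client's transfer
    return transfers.get(key)


# Two clients transferring simultaneously through the same server port:
handle_packet("172.17.0.2", 56296, "RRQ", 0)
handle_packet("172.17.0.3", 41002, "RRQ", 0)
handle_packet("172.17.0.2", 56296, "ACK", 1)

# Client .3's transfer state is untouched by client .2's ACK:
assert transfers[("172.17.0.3", 41002)] == 1
assert transfers[("172.17.0.2", 56296)] == 2
```

<div>So as long as replies go back to the client's own source port, nothing collides.</div>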