Personal Telco VPN

TableOfContents

Overview

This is a project which aims to integrate ["VPN"] technology with ["PTPnet"], focusing primarily on permanent IP-over-IP tunnels created with ["OpenVPN"]. Some history and background are available, as are references to configuration data. Keep in mind the instructions here are only geared towards systems running DebianLinux; someone should probably add notes for other types of node.

Goals

Aside from being a cool idea and a fascinating problem domain from a technical perspective, there are a couple of practical benefits to the project.

  1. Simplified maintenance - Some nodes are trapped behind unfriendly routers doing cone NAT, which prevents the NetworkOperationsTeam from working on them in the usual way. One example is NodeLuckyLab, but there are more than a couple of these out there: NodesBehindNat

  2. Universal connectivity - it would be ideal if all of our different locations were connected, via radio, laser, fiber, Ethernet, frame relay circuit or whatever you like, forming one big ["PTPnet"] cloud. Unfortunately, the fiber-backed wireless dream mesh isn't quite blanketing the world yet. However, in the meantime we can achieve a similar effect with tunnels.

    • One related idea would be allowing more users from outside our network to tunnel in, participating as VPN clients. Think PicoPeer.

      • Technically, this is quite easy to do with the foundation that is already in place. It could be set up a number of ways.
      • Several tunnels exist but currently they are all between nodes and specific designated servers; anyone can access the network but only if they are physically present at one of the integrated nodes, which needlessly limits the potential usefulness of the network.
      • What about connecting networks that are not nodes? For example, a home with no wireless network, or a block of servers at a company.
  3. Redundancy - More connections mean more bandwidth for everyone. Even tunnels have the potential to supplement direct links. For example:

    • Additional bandwidth could be gained in situations where multiple routes through different interfaces are available.
    • Fault tolerance is also possible; traffic can be redirected to another path if one interface fails, reducing or even eliminating service interruptions.
  4. IPv6 deployment - Tunnel brokers are effectively the only way that most people in this area can obtain significant IPv6 connectivity. It's better than none at all, but these tunnels tend to be unreliable and often suffer from high latency. We could do better, especially if we establish a BGP peering relationship or two at the Internet border rather than falling back on yet another broker tunnel.

  5. Education - Tunnels give us an opportunity to start acquiring practical knowledge about how to deal with increasing scale in a wide area network, which is going to be invaluable as we begin facing those problems with physical networks.

    • Similarly, we could potentially get a head start on exploring and building potential applications to run on these networks.

Status

The OLSR httpinfo plugin is running on the server: http://donk.personaltelco.net/olsr/

Also, a graph representing the current OLSR topology from the server's perspective can be viewed [http://donk.personaltelco.net/olsr_topology.png here]. This image is regenerated every 15 minutes.

Currently, the following clients are configured with tunnels:

||'''Host'''||'''Node'''||'''["PTPnet"] DNS name'''||
||ballista||NodeHawthorne||ballista.hawthorne.ptp||
||beast||["NodeTB151"]||beast.tb.ptp||
||bone||CoreServer||bone.ptp||
||bowser||["NodeLuckyLabNW"]||bowser.luckylabnw.ptp||
||buffalogap||NodeBuffaloGap||buffalogap.buffalogap.ptp||
||cantos||NodePowellsTech||cantos.powellstech.ptp||
||chevy||NodeMississippi||chevy.mississippi.ptp||
||circe||NodeWorldCup||circe.worldcup.ptp||
||cornerstone||CoreServer||cornerstone.ptp||
||cycle||NodeUglyMug||cycle.uglymug.ptp||
||dryrot||NodeCommunitecture||dryrot.communitecture.ptp||
||filth||NodeCedarHillsCrossing||filth.cedarhills.ptp||
||frick||NodeAnnaBannanas||frick.annabannanas.ptp||
||grank||["NodeUrbanGrindNW"]||grank.ugpearl.ptp||
||kong||NodeAnnaBannanasStJohns||kong.annabannanasstjohns.ptp||
||lester||NodeDivision34||lester.division34.ptp||
||liberace||NodeFreshPot||liberace.freshpot.ptp||
||loki||NodeCrowBar||loki.crowbar.ptp||
||number-one||NodeEcotrust||number-one.ecotrust.ptp||
||overlook||NodeWestover||overlook.westover.ptp||
||ramona||NodeHollywood||ramona.hollywood.ptp||
||serv0||["NodeTB"]||none yet||
||spartan||CoreServer||spartan.ptp||
||thehut||NodeOldTownPizza||thehut.oldtownpizza.ptp||
||vinge||NodePowellsBooks||vinge.powellsbooks.ptp||
||yeast||NodeRedWing||yeast.redwing.ptp||
||zeus||NodeBasementPub||zeus.basementpub.ptp||

How to Help

These are a few of the things that still need to be done:

  • Compatible OLSR configuration for clients is simple but still needs to be documented.
  • We still need to configure some of the NodesBehindNat as clients.

  • Operating systems other than DebianLinux should be supported, with documented configuration instructions.

  • Eventually it would be nice to have all of the systems accounted for by the NodeAudit configured as clients.

  • People who are not members of the NetworkOperationsTeam should be able to configure tunnel clients.

  • Some of the information on this page should be referenced from or moved to other pages.
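
As a starting point for the undocumented OLSR client configuration mentioned above, a minimal olsrd.conf fragment might look like the following. The interface names are assumptions (the OpenVPN tunnel is assumed to come up as tun0, with ath0 as a typical wireless interface); this is a sketch, not a verified working config.

```
# Minimal olsrd.conf sketch for a tunnel client.
# Interface names are assumptions; adjust to match the local system.
DebugLevel      0

Interface "tun0" "ath0"
{
    # Defaults are generally reasonable for a first attempt;
    # per-interface tuning can be added here later.
}
```

A matching configuration would need to be documented and tested against the settings actually in use on donk before relying on it.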

Also, the "Goals" section above might provide some ideas for more ambitious projects.

Methodology

Design

For now, we are using one central server (donk) and allowing any number of clients to connect directly over TCP, in a classic hub pattern.

At some point in the future, we will likely reach an upper limit with this design and will have to re-evaluate our approach given the options available at that point. Given the nature of the system, the resource most likely to be exhausted first is server bandwidth.

IP addresses are allocated dynamically by OpenVPN on the server; routing is configured dynamically by OLSR.
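
The hub design described above maps onto a fairly standard OpenVPN server setup. The following is a hypothetical sketch only; the subnet, port, and file names are assumptions, not donk's actual configuration.

```
# Hypothetical hub server config sketch (not donk's real config).
proto tcp-server          # clients connect directly over TCP
dev tun                   # IP-over-IP tunnels
server 10.11.0.0 255.255.0.0   # "server" allocates client addresses dynamically
ca   /etc/openvpn/ca.crt
cert /etc/openvpn/server.crt
key  /etc/openvpn/server.key
dh   /etc/openvpn/dh1024.pem
keepalive 10 60           # detect and restart dead tunnels
persist-key
persist-tun
```

The `server` directive is what provides the dynamic address allocation mentioned above; OLSR then takes over route propagation across the resulting tunnels.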

Configuration

Currently, these directions assume that you are a member of the NetworkOperationsTeam, or at least that you have sudo access on donk, and that the client system you are configuring is running DebianLinux.

First you must generate a new private key and certificate. The following command should do the trick; replace thenode with the short hostname of the client.

ssh donk.personaltelco.net sudo mkvpnclient thenode

You will be prompted to enter information that will be incorporated into the new certificate. Accept all of the provided defaults. You need only enter an Organizational Unit Name (description) and a Common Name (FQDN).

After that, you're ready to configure the client. You may need to install the openvpn package on the client before beginning. Use the following commands, and again, be sure to replace thenode with the short hostname of the client.

ssh thenode.personaltelco.net
scp donk.personaltelco.net:`hostname`.key client.key
scp donk.personaltelco.net:`hostname`.crt client.crt
wget http://donk.personaltelco.net/vpn/ca.crt http://donk.personaltelco.net/vpn/client.conf
sudo mv ca.crt client.conf client.crt client.key /etc/openvpn
sudo chown root:root /etc/openvpn/*
sudo /etc/init.d/openvpn start client
exit
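
For reference, the client.conf fetched from donk in the steps above is likely to resemble the following sketch. The port number and remaining directives are assumptions based on typical OpenVPN client setups, not the actual contents of the served file.

```
# Hypothetical sketch of client.conf (assumptions, not the real file).
client
proto tcp-client
remote donk.personaltelco.net 1194   # 1194 is the OpenVPN default port; assumed here
dev tun
ca   /etc/openvpn/ca.crt
cert /etc/openvpn/client.crt
key  /etc/openvpn/client.key
nobind
persist-key
persist-tun
```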

At this point, your tunnel should be working! If avahi-daemon is installed and running, you should be able to reach donk.local from the client, or thenode.local from donk.

After you're done, you should delete the private key from your home directory on the server. If you like, you can now configure OLSR and join ["PTPnet"].

History

I'd appreciate if others with different perspectives would take some time to add their own background to this section. (KeeganQuinn)

Origins

It is difficult, if not impossible, to give credit to any specific source for the idea; many WirelessCommunities have done it before.

Prior to 2006, several attempts were made at interconnecting PersonalTelco nodes over the Internet, although details are not easily found - and perhaps not particularly relevant, anyway. To be sure, the current implementation is not the first to enjoy at least a marginal degree of success.

Statically configured tunnels and routes have been used in the past, as well as OSPF dynamic routing implemented by means of Zebra and Quagga.

2006

In Summer of 2006, JimmySchmierbach and KeeganQuinn spent several weeks planning and testing a design for a virtual network with the potential to scale to serve the entire city. The results of this project were inconclusive due to problems with software stability, but a complete theory for the design was formulated. The central idea is based on a hierarchy with two tiers, referred to as supernodes and nodes. The basic idea was that all of the supernodes would be connected together with tunnels in a full mesh pattern, then each node would maintain a connection with two or more of the supernodes at all times. Fault tolerance is a central element in this design; actual potential for operation on a large scale remains untested.

Jimmy's original drawings specified the three core servers as the supernodes: cornerstone, bone and alitheia. This page previously stated that the design included the idea of supernodes each being connected to a master node (e.g. donk), resulting in a hierarchy with three tiers. That is not correct: donk was never a functional part of the original design, and was never connected to the other systems during this project. It would not have provided any notable benefit in acting as an independent tier; with three supernodes in a mesh in the center and a minimum of two supernode connections per node, all of the routes supported by the system could be maintained if any one server failed.

The design is really impressive in theory, but when the time came to test it out, things didn't work out so well. The VPN clients kept taking naps and the dynamic routing daemons often got confused about the fact that we were running a mesh on a layer over their heads. Sometimes machines on the same switch, side by side, would decide that they'd prefer to talk to each other through a big chunk of Internet. This type of erratic behavior made it difficult to take the project seriously at the time.

2007

Sometime early in 2007, discussions revealed that this was still an interesting and unimplemented idea. CalebPhillips attempted to pick up where Jimmy et al. had left off, but with a simpler design (mostly because the focus was solving the NodesBehindNAT problem and this was the clearest path to that goal) and continuing to use OpenVPN. JasonMcArthur et al. continued using OLSR for tunnels around ArborLodge, but, mostly due to ignorance (and laziness), Caleb stuck with OpenVPN. Although OpenVPN worked fairly well for the purpose, it has its limits...

Thanks largely to the success of JasonMcArthur and AaronBaer with their wireless mesh network at ArborLodge, KeeganQuinn took another look at deploying a city-wide virtual network in Summer of 2007.

It was now evident that the relevant software had reached a degree of maturity which was so clearly lacking a year before. In addition, the topology was simplified to avoid a myriad of potential issues and to speed the process of initial deployment, although a central point of failure was introduced as a consequence. It is difficult to say with certainty which of these factors had the most effect, but the result was that a reasonably stable virtual network could finally be built.

References

Related notes:

Technical details about the proposed VPN configuration: (not currently being used)


CategoryDocumentation CategoryDamnYouKeegan CategoryEducation CategoryMan CategoryNetwork

PersonalTelcoVPN (last edited 2009-10-01 11:54:05 by JasonMcArthur)