Differences between revisions 26 and 28 (spanning 2 versions)
Revision 26 as of 2007-08-07 18:59:52
Size: 16263
Editor: KeeganQuinn
Comment: this should actually work on any DebianLinux system, not just a NuCab
Revision 28 as of 2007-08-11 02:09:32
Size: 12400
Editor: beast
Comment: reorganized to better reflect current progress
Line 11: Line 11:
- == The Good ==
+ == Goals ==
Line 13: Line 13:
- Aside from being a cool idea and a fascinating problem domain from a technical perspective, there are a couple of practical benefits to this:
+ Aside from being a cool idea and a fascinating problem domain from a technical perspective, there are a couple of practical benefits to the project.
Line 15: Line 15:
-  1. '''Maintenance''' - Some nodes are trapped behind unfriendly routers doing cone NAT, which prevents the NetworkOperationsTeam from working on them in the usual way. One example is NodeLuckyLab, but there are more than a couple of these out there: NodesBehindNat
+  1. '''Simplified maintenance''' - Some nodes are trapped behind unfriendly routers doing cone NAT, which prevents the NetworkOperationsTeam from working on them in the usual way. One example is NodeLuckyLab, but there are more than a couple of these out there: NodesBehindNat
Line 26: Line 26:
-    * Similarly, we could potentially get a head start on exploring and building potential applications to run on these networks.
+     * Similarly, we could potentially get a head start on exploring and building potential applications to run on these networks.
Line 29: Line 29:
- == Goals ==
+ == Status ==
Line 31: Line 31:
- === The polished brass version ===
+ The OLSR `httpinfo` plugin is running on the server: http://donk.personaltelco.net/olsr/
Line 33: Line 33:
- Whenever someone raises the issue of tunnels in a discussion and there's a new initiative to start creating VPN tunnels, the conclusion is always that we need to take care of the NodesBehindNat. It's an easy decision to reach by committee, since it allows you to completely overrule any naysayer Scrooge types by playing the security card, and the folks who just really want a network get told it'll happen sometime soon. Everyone's happy. It's happened the same way many times, usually with different groups of people; rather than listing the cast for the whole series or even just the latest episode, just give yourself a pat on the back if you've ever been one of us.
+ Also, the current OLSR topology from the server's perspective can be viewed [http://donk.personaltelco.net/olsr_topology.png here]. This image is regenerated every 15 minutes.
Line 35: Line 35:
- The rationale is generally that those systems stand to gain the most at first. While this is completely true, the idea has actually slowed down the progress of VPN deployment overall, as a result of the very same factor which is always thought will speed it along: these nodes aren't accessible except to a person with a laptop who must physically travel to each one. The amount of work is multiplied; it's not just a matter of a couple of hours with a terminal - it can take days of traveling all over town trying to make the right things happen.
+ Currently, the following clients are configured with tunnels:
Line 37: Line 37:
- The idea that these nodes have a greater need for accessibility improvements has real merit, but there is no reason they should be handled with exclusive priority over other potential nodes. In particular, the process of establishing an effective baseline configuration is greatly simplified when you have the ability to bring the interface up and down freely.
+ || '''Host''' || '''Node''' || '''["PTPnet"] DNS name''' ||
|| ballista || NodeHawthorneHostel || `ballista.hawthorne.ptp` ||
|| beast || ["NodeTB151"] || `beast.tb.ptp` ||
|| bone || CoreServer || `bone.ptp` ||
|| buffalogap || NodeBuffaloGap || `buffalogap.buffalogap.ptp` ||
|| cantos || NodePowellsTech || `cantos.powellstech.ptp` ||
|| chevy || NodeMississippi || `chevy.mississippi.ptp` ||
|| cornerstone || CoreServer || `cornerstone.ptp` ||
|| cycle || NodeUglyMug || `cycle.uglymug.ptp` ||
|| filth || NodeCedarHills || `filth.cedarhills.ptp` ||
|| frick || NodeAnnaBannanas || `frick.annabannanas.ptp` ||
|| kong || NodeAnnaBannanasStJohns || none yet ||
|| lester || NodeDivision34 || `lester.division34.ptp` ||
|| loki || NodeCrowBar || `loki.crowbar.ptp` ||
|| number-one || NodeEcotrust || `number-one.ecotrust.ptp` ||
|| serv0 || ["NodeTB"] || none yet ||
|| thehut || NodeOldTownPizza || `thehut.oldtownpizza.ptp` ||
|| vinge || NodePowellsBooks || `vinge.powellsbooks.ptp` ||
|| zeus || NodeBasementPub || `zeus.basementpub.ptp` ||
Line 39: Line 57:
=== In the trenches ===

We've got a whole bunch of nodes that need to get hooked up. A complete list needs to be created or located; NodeAudit is probably a good starting point, and maybe the best thing we have. It is unlikely that there is even one node that is completely current in all of the aspects discussed here, although only a few will need all of these updates. Every single active node is different, which complicates matters when you want to start thinking about them as a group to simplify management, or even if you just want to provide useful directions about how they work. Connecting outdated systems also presents a serious threat to the continued viability of the network; in the past, version compatibility problems have brought the entire project to a halt after we chose to ignore them.

Some work has already been done and more details will be provided on this page when possible. Anyone who is part of the NetworkOperationsTeam is invited to participate as much as they are able. Others who are interested should contact a team member and ask about joining.
Other hosts were also working recently but need minor configuration updates to match the current server configuration; see the `client.conf` example below.
Line 46: Line 60:
- == Preparation ==
+ == How to Help ==
Line 48: Line 62:
- === Guidelines ===
+ These are a few of the things that still need to be done:
Line 50: Line 64:
- Be patient; this is probably the most significant single task that this group has ever attempted, and we'll be lucky if we get them all by 2008. Please don't begin the task unless you are prepared to fix whatever you break, even if it means ripping the box out of the dusty hole (nearly all nodes live in dusty holes), bringing it to someone who can fix it, watching them do it, then bringing it back to the dusty hole. It's a great learning experience, and you're not likely to make the same mistake twice after it costs you.
+  * Compatible OLSR configuration for clients is simple but still needs to be documented.
 * We still need to configure some of the NodesBehindNat as clients.
 * Operating systems other than DebianLinux should be supported, with documented configuration instructions.
 * Eventually it would be nice to have all of the systems accounted for by the NodeAudit configured as clients.
 * People who are not members of the NetworkOperationsTeam should be able to configure tunnel clients.
 * Some of the information on this page should be referenced from or moved to other pages.
Line 52: Line 71:
If you are doing upgrades remotely, it can be helpful to make phone contact with someone who has access to the system; a pair of hands on site can make quite a difference in some situations. If you are on site, it's a good idea to introduce yourself and explain that you are going to be working on the system. If everything goes well, the only noticeable effect will likely be that people will be trapped by the captive portal sooner than usual when it is rebooted. Some nodes can also slow down when new software is being downloaded and installed.

It sometimes happens that things do not go so well. Some of these systems were not configured with major upgrades in mind, and some were only barely adequate to begin with, so keep on the look-out for full hard disks and partitions. Check for free space before you begin downloading with `df -h`; anything less than 50M available on each partition can cause problems. You can keep an eye on file sizes when running `aptitude` if you are cutting it close. If you get stuck with a full disk, do the best you can to get the system to keep serving up Internet access and make a note of it on this page. Many nodes could use new disks, which means a fresh installation, but unfortunately there is no current documentation about that process. If you think a node needs a new disk, contact the NetworkOperationsTeam and we'll figure out a solution. As if hard disk issues were not enough of a problem, anecdotal evidence suggests that several nodes tend to just crash or break without warning from time to time; because we're neglecting them, because their operational environment is partially submerged and the water is moonlighting as part of a 220V circuit or just because some of them are really crappy old computers.
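The free-space rule of thumb above can be scripted. The following is only a sketch (the `check_space` name is invented here); it reads `df -P`-style output on stdin, so it can be tried safely anywhere:

```shell
# check_space: print a warning for any filesystem with less than
# 50M (51200 KB) available, per the rule of thumb on this page.
# Reads `df -P` output on stdin so it can be tested offline.
check_space() {
    awk 'NR > 1 && $4 < 51200 { print $6 " has only " $4 " KB free" }'
}
```

On a node, run `df -P | check_space` before starting any downloads.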

Be ready for anything if you're going to attempt the process at all. For most of the process, you can stop any time and no harm will be done - even so, don't be timid, just beat the thing up by running through the commands until you're pretty sure there are no more steps - then it'll either be really properly broken or working perfectly. Computers are pretty good at remembering stuff; if you end up on a node that's been done already, or you run the same command twice or skip one, it's just going to tell you that, and usually with this type of stuff, it will refuse to do it again. It will also refuse to do stuff at all if it's something that it doesn't like. Anyway, just dive in and have fun, and you'll have the quirks figured out in no time.


=== Tasks ===

With that in mind, this is a rough breakdown of things that need to happen to each DebianLinux system before that new tunnel goes hot. If you're working on a system which runs DebianLinux but is not managed by the NetworkOperationsTeam, you may want to look over this section anyway but you probably don't need to do everything listed.


1. Most systems will need to be upgraded to etch first.
   * Some of them will need to be upgraded to sarge before they can be upgraded to etch. Believe it.
   * Read the upgrade docs in the release notes and just follow the instructions. Seriously.
2. Install the etch kernel after all of the other updates and get it all running.
   * Technically this is part of the etch upgrade, but you must be certain it gets done.
   * Sometimes you'll need to select a different kernel specifically; some nodes will seem to be upgraded to etch completely while still running sarge or woody kernels. Take a look at `/boot` to get the whole story and always check `uname -a` as a confirmation.
   * We really want our shiny new VPN to work when we get it set up, and running anything other than a proper kernel can cause serious problems.
   * Really, this is very important. Contact another team member if you need help.
3. After the etch kernel is running, install and configure any software that isn't already taken care of.
   * OpenVPN (current etch version)
   * olsrd (backported version; see DebianAptSource)
   * osirisd (current etch version)
   * use only the versions indicated or you'll wish you had.
4. Make sure it all works as well as you're able. Clean up as much as possible. Remove old packages and even consider another reboot just to be sure all of the configuration is working correctly.
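The kernel check in step 2 can be reduced to a one-liner. This is a sketch (the `check_kernel` name is invented; etch shipped the 2.6.18 kernel, so anything else suggests a leftover sarge or woody kernel):

```shell
# check_kernel: report whether a kernel version string looks like
# the etch kernel (2.6.18); anything else is probably stale.
check_kernel() {
    case "$1" in
        2.6.18*) echo "ok" ;;
        *)       echo "stale" ;;
    esac
}

# On a node:  check_kernel "$(uname -r)"
```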


These are some pretty mundane goals, and they rather make it sound like someone is paying us or something. Be that as it may, this process should provide us with a level of connectivity far beyond anything we have done before. To build upon this foundation, some more interesting ideas (although also more long-term) are described in the section about benefits, above.
Also, the "Goals" section above might provide some ideas for more ambitious projects.
Line 89: Line 80:
- === Scalability ===
+ At some point in the future, we will likely reach an upper limit with this design and will have to re-evaluate our approach given the options available at that point. Given the nature of the system, the resource most likely to be exhausted first is server bandwidth.
Line 91: Line 82:
- At some point in the future, we will likely to reach an upper limit with this design and we will have to re-evaluate our options given the options which are available at that point. Given the nature of the system, the resource that is most likely to be exhausted first would be bandwidth.
+ IP addresses are allocated dynamically by OpenVPN on the server; routing is configured dynamically by OLSR.
Line 95: Line 86:
- ==== Just checking ====
+ Currently, these directions assume that you are a member of the NetworkOperationsTeam, or at least that you have `sudo` access on donk, and that the client system you are configuring is running DebianLinux.
Line 97: Line 88:
Oh, you actually want to set up a link?

Are you absolutely positive that everything has been upgraded and the new kernel is running?


==== OpenVPN ====

- To generate a new keypair for a client do something like this:
+ First you must generate a new private key and certificate. The following series of commands should do the trick; be sure to replace `you` with your username and `thenode` with the hostname of the client.
Line 107: Line 91:
- ssh you@donk
+ ssh you@donk.personaltelco.net
Line 115: Line 99:
chown you:you ~/thenode.crt ~/thenode.key
Line 118: Line 103:
- Next, you must configure the client. Do something like:
+ After this initial setup is done on the server, you're ready to configure the client. Note that current versions of the `personal-telco-router` package depend on `openvpn`, so it may not be necessary to install it if you're dealing with a node managed by the NetworkOperationsTeam and everything is up-to-date. In this case, you can skip both of the steps which call `aptitude`.
Line 122: Line 107:
- sudo apt-get update
- sudo apt-get install openvpn
- cd /etc/openvpn
- sudo scp you@donk:thenode.* .
- sudo scp you@donk:/etc/openvpn/keys/ca.crt .
+ scp donk.personaltelco.net:thenode.* .
+ scp donk.personaltelco.net:/etc/ssl/certs/ca.crt .
+ sudo aptitude update
+ sudo aptitude install openvpn
+ sudo mv thenode.key thenode.crt ca.crt /etc/openvpn
+ sudo chown root:root /etc/openvpn/*
Line 128: Line 114:

After the key and certificate have been copied to the client, you should delete them from the server.
Line 150: Line 138:
- You can now install the `olsrd-plugins` package, configure OLSR and join ["PTPnet"].
+ If you like, you can now install the `olsrd-plugins` package, configure OLSR and join ["PTPnet"].
Line 153: Line 141:
- == Address Allocation ==
+ == History ==
Line 155: Line 143:
- === Servers ===
+ I'd appreciate it if others with different perspectives would take some time to add their own background to this section. (KeeganQuinn)
Line 157: Line 145:
- || Server || 10.11.255.? || Port || Proto || Compression || Dev ||
- || donk || 1 || 1195/tcp || OpenVPN || lzo || tap0 ||
+ === Origins ===
Line 160: Line 147:
- === Clients ===
+ It is difficult, if not impossible, to give credit to any specific source for the idea; many WirelessCommunities have done it before.
Line 162: Line 149:
- Client addresses are assigned dynamically by the server.
+ Prior to 2006, several attempts were made at interconnecting PersonalTelco nodes over the Internet, although details are not easily found - and perhaps not particularly relevant, anyway. To be sure, the current implementation is not the first to enjoy at least a marginal degree of success.
Line 164: Line 151:
Statically configured tunnels and routes have been used in the past, as well as OSPF dynamic routing implemented by means of Zebra and Quagga.
Line 165: Line 153:
- === DNS ===
+ === 2006 ===
Line 167: Line 155:
- Install the `avahi-daemon` package on the client system and use mDNS.
+ In Summer of 2006, JimmySchmierbach and KeeganQuinn spent several weeks planning and testing a design for a virtual network with the potential to scale to serve the entire city. The results of this project were inconclusive due to problems with software stability, but a complete theory for the design was formulated. The central idea is based on a hierarchy with two tiers, referred to as supernodes and nodes. The basic idea was that all of the supernodes would be connected together with tunnels in a full mesh pattern, then each node would maintain a connection with two or more of the supernodes at all times. Fault tolerance is a central element in this design; actual potential for operation on a large scale remains untested.

Jimmy's original drawings specified the three core servers as the supernodes: cornerstone, bone and alitheia. This page previously stated that the design included the idea of supernodes each being connected to a master node (eg. donk), resulting in a hierarchy with three tiers. That is not correct: donk was never a functional part in the original design, and was never connected to the other systems during this project. It would not have provided any notable benefit in acting as an independent tier; with three supernodes in a mesh in the center and a minimum of two supernode connections per node, all of the routes supported by the system could be maintained if any one server failed.

The design is really impressive in theory, but when the time came to test it out, things didn't work out so well. The VPN clients kept taking naps and the dynamic routing daemons often got confused about the fact that we were running a mesh on a layer over their heads. Sometimes machines on the same switch, side by side, would decide that they'd prefer to talk to each other through a big chunk of Internet. This type of erratic behavior made it difficult to take the project seriously at the time.

=== 2007 ===

Thanks largely to the success of JasonMcArthur and AaronBaer with their wireless mesh network at ArborLodge, KeeganQuinn took another look at deploying a city-wide virtual network in Summer of 2007.

It was now evident that the relevant software had reached a degree of maturity which was so clearly lacking a year before. In addition, the topology was simplified to avoid a myriad of potential issues and to speed the process of initial deployment, although a central point of failure was introduced as a consequence. It is difficult to say with certainty which of these factors had the most effect, but the result was that a reasonably stable virtual network could finally be built.
Line 176: Line 174:
 * NodesBehindNat
 * NetworkAddressAllocations
Line 184: Line 183:
 * NodesBehindNat
 * NetworkAddressAllocations


== History ==

In Summer of 2006, JimmySchmierbach and KeeganQuinn spent several weeks planning and testing a design for a virtual network with the potential to scale to serve the entire city. The results of this project were inconclusive due to problems with software stability, but a complete theory for the design was formulated. The central idea is based on a hierarchy with two tiers, referred to as supernodes and nodes. The basic idea was that all of the supernodes would be connected together with tunnels in a full mesh pattern, then each node would maintain a connection with two or more of the supernodes at all times. Fault tolerance is a central element in this design; actual potential for operation on a large scale remains untested.

Jimmy's original drawings specified the three core servers as the supernodes: cornerstone, bone and alitheia. This page previously stated that the design included the idea of supernodes each being connected to a master node (eg. donk), resulting in a hierarchy with three tiers. That is not correct: donk was never a functional part in the original design, and was never connected to the other systems during this project. It would not have provided any notable benefit in acting as an independent tier; with three supernodes in a mesh in the center and a minimum of two supernode connections per node, all of the routes supported by the system could be maintained if any one server failed.

The design is really impressive in theory, but when the time came to test it out, things didn't work out so well. The VPN clients kept taking naps and the dynamic routing daemons often got confused about the fact that we were running a mesh on a layer over their heads. Sometimes machines on the same switch, side by side, would decide that they'd prefer to talk to each other through a big chunk of Internet. This type of erratic behavior made it difficult to take the project seriously at the time.

It certainly seems that the software versions available now have come a long way in terms of solving these problems. The reliability and consistency of the current network is far better than has ever been accomplished in the past. However, it is difficult to say with certainty if this effect is a result of improvements in the software or the simplification of our topology. At this point, building a working network is more important than proving our triad theory; perhaps at some point in the future we will have a better opportunity to test it.

Personal Telco VPN

TableOfContents

Overview

This is a project which aims to integrate ["VPN"] technology with ["PTPnet"], focusing primarily on permanent IP-over-IP tunnels created with ["OpenVPN"]. Some history and background are available, as are references to configuration data. Keep in mind the instructions here are only geared towards systems running DebianLinux; someone should probably add notes for other types of node.

Goals

Aside from being a cool idea and a fascinating problem domain from a technical perspective, there are a couple of practical benefits to the project.

  1. Simplified maintenance - Some nodes are trapped behind unfriendly routers doing cone NAT, which prevents the NetworkOperationsTeam from working on them in the usual way. One example is NodeLuckyLab, but there are more than a couple of these out there: NodesBehindNat

  2. Universal connectivity - it would be ideal if all of our different locations were connected, via radio, laser, fiber, Ethernet, frame relay circuit or whatever you like, forming one big ["PTPnet"] cloud. Unfortunately, the fiber-backed wireless dream mesh isn't quite blanketing the world yet. However, in the meantime we can achieve a similar effect with tunnels.

    • One related idea would be allowing more users from outside our network to tunnel in, participating as VPN clients. Think PicoPeer.

      • Technically, quite easy to do, with the foundation that is already in place. It could be set up a number of ways.
      • Several tunnels exist but currently they are all between nodes and specific designated servers; anyone can access the network but only if they are physically present at one of the integrated nodes, which needlessly limits the potential usefulness of the network.
      • What about connecting networks that are not nodes? For example, a home with no wireless network, or a block of servers at a company.
  3. Redundancy - More connections mean more bandwidth for everyone. Even tunnels have the potential to supplement direct links. For example:

    • Additional bandwidth could be gained in situations where multiple routes through different interfaces are available.
    • Fault tolerance is also possible; traffic can be redirected to another path if one interface fails, reducing or even eliminating service interruptions.
  4. IPv6 deployment - Tunnel brokers are effectively the only way that most people in this area can obtain significant IPv6 connectivity. It's better than none at all, but these tunnels tend to be unreliable and often suffer from high latency. We could do much better ourselves, especially if we establish a BGP peer relationship or two at the Internet border rather than falling back to another broker tunnel.

  5. Education - Tunnels give us an opportunity to start acquiring practical knowledge about how to deal with increasing scale in a wide area network, which is going to be invaluable as we begin facing those problems with physical networks.

    • Similarly, we could potentially get a head start on exploring and building potential applications to run on these networks.

Status

The OLSR httpinfo plugin is running on the server: http://donk.personaltelco.net/olsr/

Also, the current OLSR topology from the server's perspective can be viewed [http://donk.personaltelco.net/olsr_topology.png here]. This image is regenerated every 15 minutes.
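For the curious, the 15-minute regeneration could be driven by a cron entry along these lines. This is a guess rather than donk's actual setup: it assumes the `olsrd_dot_draw` plugin is loaded on its default port 2004, that graphviz `dot` is installed, and the file names are invented.

```
# hypothetical /etc/cron.d/olsr-topology entry on donk
*/15 * * * * root nc 127.0.0.1 2004 | dot -Tpng > /var/www/olsr_topology.png
```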

Currently, the following clients are configured with tunnels:

|| '''Host''' || '''Node''' || '''["PTPnet"] DNS name''' ||
|| ballista || NodeHawthorneHostel || `ballista.hawthorne.ptp` ||
|| beast || ["NodeTB151"] || `beast.tb.ptp` ||
|| bone || CoreServer || `bone.ptp` ||
|| buffalogap || NodeBuffaloGap || `buffalogap.buffalogap.ptp` ||
|| cantos || NodePowellsTech || `cantos.powellstech.ptp` ||
|| chevy || NodeMississippi || `chevy.mississippi.ptp` ||
|| cornerstone || CoreServer || `cornerstone.ptp` ||
|| cycle || NodeUglyMug || `cycle.uglymug.ptp` ||
|| filth || NodeCedarHills || `filth.cedarhills.ptp` ||
|| frick || NodeAnnaBannanas || `frick.annabannanas.ptp` ||
|| kong || NodeAnnaBannanasStJohns || none yet ||
|| lester || NodeDivision34 || `lester.division34.ptp` ||
|| loki || NodeCrowBar || `loki.crowbar.ptp` ||
|| number-one || NodeEcotrust || `number-one.ecotrust.ptp` ||
|| serv0 || ["NodeTB"] || none yet ||
|| thehut || NodeOldTownPizza || `thehut.oldtownpizza.ptp` ||
|| vinge || NodePowellsBooks || `vinge.powellsbooks.ptp` ||
|| zeus || NodeBasementPub || `zeus.basementpub.ptp` ||

Other hosts were also working recently but need minor configuration updates to match the current server configuration; see the client.conf example below.

How to Help

These are a few of the things that still need to be done:

  • Compatible OLSR configuration for clients is simple but still needs to be documented.
  • We still need to configure some of the NodesBehindNat as clients.
  • Operating systems other than DebianLinux should be supported, with documented configuration instructions.
  • Eventually it would be nice to have all of the systems accounted for by the NodeAudit configured as clients.
  • People who are not members of the NetworkOperationsTeam should be able to configure tunnel clients.
  • Some of the information on this page should be referenced from or moved to other pages.

Also, the "Goals" section above might provide some ideas for more ambitious projects.

Methodology

Design

For now, we are using one central server (donk) and allowing any number of clients to connect directly over TCP, in a classic hub pattern.

At some point in the future, we will likely reach an upper limit with this design and will have to re-evaluate our approach given the options available at that point. Given the nature of the system, the resource most likely to be exhausted first is server bandwidth.

IP addresses are allocated dynamically by OpenVPN on the server; routing is configured dynamically by OLSR.
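For reference, a server configuration consistent with the facts on this page (TCP port 1195, tap device, LZO compression, dynamic client addresses under 10.11.255.0/24 with donk taking .1) might look roughly like the sketch below. The actual file on donk is not published here, and the cert/key/dh file names are assumptions.

```
# /etc/openvpn/server.conf on donk (illustrative sketch only)
port 1195
proto tcp-server
dev tap0
ca /etc/ssl/certs/ca.crt
cert /etc/openvpn/keys/donk.crt    # assumed file name
key /etc/openvpn/keys/donk.key     # assumed file name
dh /etc/openvpn/keys/dh1024.pem    # assumed file name
server 10.11.255.0 255.255.255.0   # server takes .1, clients get the rest
comp-lzo
keepalive 10 120
```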

Configuration

Currently, these directions assume that you are a member of the NetworkOperationsTeam, or at least that you have sudo access on donk, and that the client system you are configuring is running DebianLinux.

First you must generate a new private key and certificate. The following series of commands should do the trick; be sure to replace you with your username and thenode with the hostname of the client.

ssh you@donk.personaltelco.net
sudo -s
cd /etc/ssl/easy-rsa
. vars
./build-key thenode
cp keys/thenode.crt /etc/openvpn/keys/
cp keys/thenode.crt ~
mv keys/thenode.key ~
chown you:you ~/thenode.crt ~/thenode.key
exit
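Before copying the files off the server, it may be worth confirming that the new certificate really was signed by our CA. A small sketch (the `verify_cert` name is made up; the paths match the ones used above):

```shell
# verify_cert CA_FILE CERT_FILE: succeeds (exit 0) only if CERT_FILE
# verifies against CA_FILE.
verify_cert() {
    openssl verify -CAfile "$1" "$2" > /dev/null 2>&1
}

# On donk, for example:
#   verify_cert /etc/openvpn/keys/ca.crt keys/thenode.crt
```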

After this initial setup is done on the server, you're ready to configure the client. Note that current versions of the personal-telco-router package depend on openvpn, so it may not be necessary to install it if you're dealing with a node managed by the NetworkOperationsTeam and everything is up-to-date. In this case, you can skip both of the steps which call aptitude.

ssh you@thenode
scp donk.personaltelco.net:thenode.* .
scp donk.personaltelco.net:/etc/ssl/certs/ca.crt .
sudo aptitude update
sudo aptitude install openvpn
sudo mv thenode.key thenode.crt ca.crt /etc/openvpn
sudo chown root:root /etc/openvpn/*

After the key and certificate have been copied to the client, you should delete them from the server.

Create the client configuration file at /etc/openvpn/client.conf:

client
remote donk.personaltelco.net 1195
proto tcp-client
dev tap
ca /etc/openvpn/ca.crt
cert /etc/openvpn/thenode.crt
key /etc/openvpn/thenode.key
comp-lzo

And finally, start OpenVPN on the client-side:

sudo /etc/init.d/openvpn start client

At this point, if avahi-daemon is installed and running, you should be able to reach donk.local from the client, or thenode.local from donk.

If you like, you can now install the olsrd-plugins package, configure OLSR and join ["PTPnet"].
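The compatible client OLSR configuration has not been documented yet (see "How to Help" above), but a minimal `/etc/olsrd.conf` for a tunnel client would presumably look something like this sketch; treat every value here as an assumption until the real configuration is written up.

```
# /etc/olsrd.conf sketch for a tunnel client (assumptions throughout)
DebugLevel    0

# OLSR only needs to run on the tunnel interface
Interface "tap0"
{
}
```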

History

I'd appreciate it if others with different perspectives would take some time to add their own background to this section. (KeeganQuinn)

Origins

It is difficult, if not impossible, to give credit to any specific source for the idea; many WirelessCommunities have done it before.

Prior to 2006, several attempts were made at interconnecting PersonalTelco nodes over the Internet, although details are not easily found - and perhaps not particularly relevant, anyway. To be sure, the current implementation is not the first to enjoy at least a marginal degree of success.

Statically configured tunnels and routes have been used in the past, as well as OSPF dynamic routing implemented by means of Zebra and Quagga.

2006

In Summer of 2006, JimmySchmierbach and KeeganQuinn spent several weeks planning and testing a design for a virtual network with the potential to scale to serve the entire city. The results of this project were inconclusive due to problems with software stability, but a complete theory for the design was formulated. The central idea is based on a hierarchy with two tiers, referred to as supernodes and nodes. The basic idea was that all of the supernodes would be connected together with tunnels in a full mesh pattern, then each node would maintain a connection with two or more of the supernodes at all times. Fault tolerance is a central element in this design; actual potential for operation on a large scale remains untested.

Jimmy's original drawings specified the three core servers as the supernodes: cornerstone, bone and alitheia. This page previously stated that the design included the idea of supernodes each being connected to a master node (eg. donk), resulting in a hierarchy with three tiers. That is not correct: donk was never a functional part in the original design, and was never connected to the other systems during this project. It would not have provided any notable benefit in acting as an independent tier; with three supernodes in a mesh in the center and a minimum of two supernode connections per node, all of the routes supported by the system could be maintained if any one server failed.

The design is really impressive in theory, but when the time came to test it out, things didn't work out so well. The VPN clients kept taking naps and the dynamic routing daemons often got confused about the fact that we were running a mesh on a layer over their heads. Sometimes machines on the same switch, side by side, would decide that they'd prefer to talk to each other through a big chunk of Internet. This type of erratic behavior made it difficult to take the project seriously at the time.

2007

Thanks largely to the success of JasonMcArthur and AaronBaer with their wireless mesh network at ArborLodge, KeeganQuinn took another look at deploying a city-wide virtual network in Summer of 2007.

It was now evident that the relevant software had reached a degree of maturity which was so clearly lacking a year before. In addition, the topology was simplified to avoid a myriad of potential issues and to speed the process of initial deployment, although a central point of failure was introduced as a consequence. It is difficult to say with certainty which of these factors had the most effect, but the result was that a reasonably stable virtual network could finally be built.

References

Related notes:

Technical details about the proposed VPN configuration: (not currently being used)


CategoryDocumentation CategoryDamnYouKeegan CategoryEducation CategoryMan CategoryNetwork

PersonalTelcoVPN (last edited 2009-10-01 11:54:05 by JasonMcArthur)