Email: omega@temple-baptist.com

Homepage / Main Project: [[http://gstreamer.net/|GStreamer]]

Interests: Audio, Video, etc.

Map: [[http://www.mapquest.com/cgi-bin/ia_find?link=btwn%2Ftwn-map_results&random=565&event=find_search&SNVData=&address=2926+NE+58th+Ave&city=Portland&State=OR&Zip=97213&Find+Map.x=47&Find+Map.y=1|Map]]

Node pictures: [[http://www.omegacs.net/~omega/ptp/nodeviews]]
I'm a Portland native, located on NE 58th a block north of Sandy. I've been running wireless at work and home (now the same thing) for quite some time. My first experiences with wireless were about three years ago with 900MHz Wavelan PCMCIA cards out at Oregon Graduate Institute (where I worked). I promptly graduated to 2.4GHz, and soon got my hands on 72 ISA Wavelan cards (most of which have since been sold). I ran that at home until OGI got some 2Mbps 802.11 cards, and then 6Mbps cards (pre-802.11, really...). I got a pair of barely pre-Orinoco cards for home and moved to 11Mbps, which is where I am now. I also have a Linksys card from work (http://www.ridgerun.com/) that's somewhat tied to the iPAQ I've got.
I've worked some with Jim Binkley (from PSU, the MobileIP guru), including taking a class from him on routing, where my (unfinished) class project was to write a mobile ad-hoc link-layer routing daemon (very similar to AODV/madhoc) for Linux.
I have yet to put up a permanent antenna, but will try to begin the process next weekend (Aug 11-12, 2001) with a mounting mast. I've messed with getting the old ISA cards to go longer distances, and had success with a pair of tin cans at somewhere around 100 yards (could probably go farther, but the PCMCIA cards can't take external antennae). The project was abandoned when a tree came between the only two viable mounting locations. The neighbor in question is going back to college in a week, but his dad is a ham radio operator with a large mast in the backyard that would make an ideal omni cell location, if I can convince him.
I'm interested in the concept of setting up a highly ad-hoc network of wireless sites, consisting mostly of omni antennae if at all possible. Each cell center should be able to talk to at least one other cell center (omni to omni, which we'll find out at the field day), and ideally most would have DSL or other connections as well. Also of interest is whether a single card can drive both an omni and a unidirectional antenna, to talk to a more distant cell.
'''''Some thoughts on network design and sustainability'''''
Do you want to move this to another web page so people can comment without trashing your home page :-) Maybe a NetworkDesign page? -- AdamShand
I've been thinking about what's required for a network like this to a) exist, and b) prosper.
'''Network Design'''
From the design perspective, I would propose a tiered architecture similar to the [[http://seattlewireless.net/|SeattleWireless]] [[SeattleWireless:AxNode|AxNode]]/[[SeattleWireless:BxNode|BxNode]]/[[SeattleWireless:CxNode|CxNode]] idea, but a little lighter on the hardware side, I hope, depending on the outcome of some of the PlayDay experiments.
The most complex node would contain two or more 802.11b cards with sector antennae, mounted on a tower. The second-tier node would have a directional antenna pointed at either a first- or second-tier node, and an omni to create a microcell. Third-tier nodes would be machines that participate in a madhoc mesh, helping route between nodes that are not in direct contact with a microcell or macrocell. Fourth-tier nodes (clients) would just connect to the madhoc mesh, not actually route in it (like Windoze boxen).
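To make the tier distinctions concrete, here's a minimal sketch of the node roles in Python; the class names and fields are purely illustrative, not part of any existing software:

{{{
# Hypothetical data model for the four node tiers described above.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    tier: int                      # 1 = sectored tower, 2 = directional + omni microcell,
                                   # 3 = madhoc mesh router, 4 = client only
    has_internet_uplink: bool = False

    def routes_in_mesh(self) -> bool:
        """Tiers 1-3 forward traffic for others; tier-4 clients only originate it."""
        return self.tier <= 3

# Example topology: a tower node, a microcell hanging off it, and a client laptop.
tower = Node("tower-1", tier=1, has_internet_uplink=True)
microcell = Node("ne-58th-omni", tier=2)
laptop = Node("client-laptop", tier=4)

for n in (tower, microcell, laptop):
    print(n.name, "routes in mesh" if n.routes_in_mesh() else "client only")
}}}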
A key point is that each of these nodes may or may not have a link to the Internet (including first-tier nodes...). The other key point is that if we can get madhoc to work right (with significant improvements and enhancements, most likely), there need be no explicit routing done at all, and possibly not even explicit addressing, though that could prove to be a problem.
All the nodes in the network would live in the 10.x.x.x range, with some geographic distribution of subnets. This would have to be coupled with something like [[SeattleWireless:IntraIntra|IntraIntra-Network Routing Protocol]], in order to let the various nodes talk to other intra networks in other cities, for instance.
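As an illustration of that kind of geographic carving, here's a sketch using Python's ipaddress module; the area names and the choice of a /16 per area are assumptions, not an agreed allocation plan:

{{{
# Carve 10.0.0.0/8 into one /16 per rough geographic area; each area could
# then delegate smaller blocks (e.g. /24s) to individual cells.
import ipaddress

private_block = ipaddress.ip_network("10.0.0.0/8")
areas = ["ne-portland", "se-portland", "nw-portland", "downtown"]

allocations = dict(zip(areas, private_block.subnets(new_prefix=16)))
for area, subnet in allocations.items():
    print(f"{area:12s} -> {subnet}")   # e.g. ne-portland  -> 10.0.0.0/16
}}}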
A routing mechanism must be designed whereby every node that has a link to the Internet (DSL, T1, etc.) runs a daemon that controls the IP policy routing on that node. The general idea would be to keep track of wireless and wired link utilization, bandwidth utilization policies, and costs, and to route traffic (possibly down to individual connections) through different paths as time passes. Coupled with user-space proxies/caches for common things like DNS and HTTP, this would enable the network to maintain policy and effectiveness automatically. This is a research problem, and as such it would be very useful to try to get JimBinkley and others at [[PSU]] involved.
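As a rough sketch of the gateway-selection side of such a daemon, here's the kind of scoring logic it might use; the link names, cost weights, and scoring formula are assumptions, and actually applying the decision (via Linux policy routing) is left out:

{{{
# Pick the "best" Internet uplink based on measured utilization and a policy
# cost weight. A real daemon would feed this into ip rule / ip route changes.
from dataclasses import dataclass

@dataclass
class Uplink:
    name: str
    capacity_kbps: int        # nominal DSL/T1 capacity
    current_load_kbps: int    # measured utilization
    cost_weight: float        # policy cost (e.g. metered vs. donated bandwidth)

    def score(self) -> float:
        """Lower is better: prefer idle, cheap links."""
        return self.current_load_kbps / self.capacity_kbps + self.cost_weight

def choose_uplink(uplinks):
    return min(uplinks, key=lambda u: u.score())

uplinks = [
    Uplink("home-dsl", 640, 500, cost_weight=0.1),
    Uplink("neighbor-dsl", 640, 50, cost_weight=0.2),
]
print("route new flows via", choose_uplink(uplinks).name)
}}}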
'''Sustainability'''
The problem with such a network is that it relies on small connections to the Internet, each of which is paid for by individuals. Such a network will do wonders for web surfing (due to a potentially large distributed HTTP cache system) and other intermittent and relatively low-bandwidth services (DNS, mail, etc.), but cannot scale to a full-blown network without significant routing to the rest of the world.
My proposal is this: the network itself is maintained as a collection of hardware owned by various individuals and not-for-profit groups (potentially with some kind of holding entity for larger investments), and some level of service is always available through the sharing of personal DSL connections, but there also has to be commercial backing.
I've always maintained that there must be a significant and enforced distinction between network infrastructure/last-mile and actual service providers. When the two are the same company, you get massive beasts like AOL/Time-Warner, which control both the last mile and the packets you send. This has been proven not to work very well for people who want to do anything but suck down what AOL/Time-Warner wants them to...
Therefore, we have the ability to implement a network properly: the last mile is "free" because it's handled by a cooperative, and done with wireless technology, which requires, well, no wires. Actual connectivity to the greater Internet, for most people on the network, would be a paid service by way of a cooperating ISP.
The idea is that this ISP would provide, say, DSL connectivity at maximum throughput to various individuals who are in relatively key locations in the wireless mesh (first- and second-tier nodes are ideal candidates, but even a third-tier node can be of use, especially if it's across the street from the ISP or CO). Those people would probably get free service in exchange for the ISP's ability to route paying traffic through their premises, and likely a higher CIR/MIR than they started with.
One requirement that I would impose on any ISP that participates in this system is that its IP policy make roughly 1/4 (or some other reasonable figure) of the bandwidth it provides available, on average, for public use. Also, any equipment installed by the ISP that is not on its own premises would be owned/controlled by someone other than the ISP, and would become a full-fledged member of the network (i.e. while the wireless link(s) that connect one of their drop-points should be set up to guarantee at least the DSL's bandwidth to the paying customers, they ''cannot'' disallow other mesh traffic).
This will require even more interesting network support software, as packets from a given node on the 10.x.x.x network must be explicitly routed to one of their ISP's connect points. That means that either every higher-level router (first- and second-tier, plus any third-tier nodes that are connection points) must cooperate with all others in the network to keep track of where packets are destined (which could change by the minute), or some tunneling/tagging protocol needs to be developed.
For instance, a client that's paying for service starts by linking to the network and opening a connection to the known (by local DNS) address of their ISP's auth server. The result of this is a key that their default gateway would use to somehow tag the client's packets and route them to the ISP. The important point is that the tagged packets should be able to be routed to any of the ISP's connection points, not just one of them. From there, the "normal" madhoc and other routing takes over.
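To illustrate one way such tagging could work, here's a toy sketch where the gateway marks a paying client's packets with an HMAC derived from the key issued by the auth server, and any of the ISP's connection points can verify it. The tag format, field names, and the use of an HMAC are all my assumptions; nothing like this exists yet.

{{{
# Toy tag-and-forward sketch: gateway tags packets with a session key from the
# ISP auth server; any ISP connection point can verify and accept them.
import hashlib, hmac

SESSION_KEY = b"issued-by-isp-auth-server"   # hypothetical key from the auth exchange

def tag_packet(client_ip: str, payload: bytes, key: bytes) -> dict:
    """Wrap a packet with a MAC so any ISP connection point can verify it."""
    mac = hmac.new(key, client_ip.encode() + payload, hashlib.sha1).hexdigest()
    return {"src": client_ip, "payload": payload, "tag": mac}

def connection_point_accepts(packet: dict, key: bytes) -> bool:
    expected = hmac.new(key, packet["src"].encode() + packet["payload"],
                        hashlib.sha1).hexdigest()
    return hmac.compare_digest(expected, packet["tag"])

pkt = tag_packet("10.1.2.3", b"GET / HTTP/1.0", SESSION_KEY)
print("accepted:", connection_point_accepts(pkt, SESSION_KEY))
}}}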
----
[CategoryHomepage]