December 19, 2020: Having debugged some of the problems, and returning with an ethernet tester that reports distance to fault, reinstalled the bullet ac, and strung a new ethernet run (58 feet). Left the 12V power supply in place, which works for the bullet ac, but not for the unifi-ac-mesh which needs 24V. So far, it is looking excellent, with a booming signal at both ends of the 5GHz link. The Ballroom roof is next up for upgrades. JasonBergstrom, TomasKuchta and RussellSenior participating.
December 12, 2020: Quick visit to swap out the bullet m5 for a bullet ac, however multiple failures and running out of daylight led to an abort with a regression (the ethernet run which was working when we arrived was not working when we left). JasonBergstrom, TomasKuchta, TedBrunner and RussellSenior participating.
December 9, 2020: Replaced metrix-commons with a ubiquiti rocket m5 and a dual-polarity omni antenna. Replaced the ubiquiti loco m9 with a unifi-ac-mesh for local coverage. Replaced both power supplies in the telco closet. The m9 pair, left over from the link with the grand central roof, might be redeployed to reinforce the link from commons to the ballroom. RussellSenior did all this work.
November 29, 2020: The new gateway device, currently numbered 10.11.104.20, consisting of a Ubiquiti ER-X, was connected to the Internet. We are currently seeing weak reception at metrix-commons from the new bullet m5, and also seeing some out-of-memory induced reboots. We currently plan to return with a new Ubiquiti Bullet AC in mid-December to replace the struggling bullet m5hp. --RussellSenior
November 21, 2020: Installed a new device, currently numbered 10.11.104.19, consisting of a Ubiquiti Bullet M5HP, in the alley behind Mississippi Commons, pointing at the antennas of metrix-commons. This is to be connected to a new higher bandwidth gateway on the premises of the host. Installation was made by JasonBergstrom, TedBrunner, TomasKuchta and RussellSenior, with Tomas doing the ladder climbing, and TedBrunner doing the ladder transporting.
May 23, 2015: Decommissioned our equipment from the roof of the Grand Central Baking building. This provided the internet connection to the rest of the network. Reconfigured the network at NodeFreshPot to provide an alternative gateway. --RussellSenior
July 2, 2014: Visited the Mississippi Ballroom building, inspecting our equipment on the roof, trying to assess how to improve the backhaul link to Mississippi Commons.
May 6, 2014: After a couple-month outage at NodeEd (4135 N Mississippi), service was restored this evening to the northernmost part of the network, thanks to the generous and moderately risky efforts of our volunteer crew: EdHan, GustavSwanson, ConnorScott, JasonBergstrom, JorenLove, RussellSenior, and MatthewKlug. There were scheduling problems around getting people and cooperative weather lined up simultaneously. But tonight, in just a few hours, we pulled it all together. The same Metrix box on the same mast was reinstalled. A new ethernet connector and cat6 cable run into the house was installed, with new chimney straps on a new chimney. Hopefully, it will be good for another 8 years. Meanwhile, the Ballroom roof is still having trouble. I was hopeful that tonight's reinstallation would provide a path back to the gateway for Ballroom, but as our aim improved toward the desired target (Commons), it got worse for Ballroom. So, the Ballroom roof still needs some kind of solution. --RussellSenior
December 30, 2013: There was another outage yesterday (until about 9am this morning). A few weeks ago, I had installed a UPS on the wall to try to isolate our gear from any building power issues. This time I replaced the Alix2 and all of the power supplies. The NanoStation2 is on a surge-suppression-only power outlet, the others should have some battery backup, possibly for a long time due to the small load we are placing on them. I also replaced the silly wooden shelf device we had there with a simple board on which the Alix2 is mounted using one of our newly acquired wallmount kits, and the three PoE injectors. They are labelled as well. We'll continue to watch this network for misbehavior. --RussellSenior
December 6, 2013: There was an unexplained outage of silt, the alix2 gateway device, at about 4:30pm. I noticed about two hours later, managed to get into the building and check on it. I attached a serial console, got no response, and rebooted it. Came back up fine. No idea why, which is annoying, since it could happen again. I am going to guess it was a power blip of some kind. The electricity there has lots of motor loads for their HVAC equipment. We lost an alix there this year when it stopped talking to its CF card, maybe also related to power. --RussellSenior
January 29, 2012: Replaced an access point and eventually (today) a wall-wart power supply, the latter fixing the problem, in the laundry room at the Mississippi Ballroom building. This was a peculiar case: the access point was probably just fine, but for some reason its power supply was not delivering enough power to *receive* data. The AP came up fine without apparent errors, and it beaconed so that clients could see the network, but it was not possible to associate with it. Replacing the power supply fixed the problem. Thanks to ColinFrey for reporting the problem. --RussellSenior
June 28, 2011: We are seeing packet loss on the point-to-point link between the Ubiquiti PowerStation2's on Mississippi Commons and the Grand Central Baking roof. Both ends report (via the LED indicators) solid signal strength. It isn't clear what is causing the packet loss. There is heavy tree foliage in the path. We (DanRasmussen and RussellSenior) noticed that the mast on Ed Han's roof had slipped a bit. It looks like the upper two courses of bricks have moved. This would be consistent with a sudden change in signal strength that we saw back in February. Ed has been alerted. --RussellSenior
May 30, 2011: After good experience elsewhere, reflashed the 4 remaining metrix boxes with OpenWrt r27000 and batman-adv, replacing the old Metrix Pyramid + WDS configuration we flashed on them in June of 2007. Thanks Metrix Pyramid. Long live Openwrt + batman-adv! --RussellSenior
April 2, 2010: A power-outage induced downtime. Sometime about 1 p.m., a power outage caused the devices at the bakery (including the gateway device) and commons to drop. The devices on the bakery roof recovered, but apparently, as a result of this power interruption, our metrix on the commons roof got stuck. This was particularly bad because it acts as the primary relay point for the roof-to-roof network. This outage was not noticed for several hours, until late evening. I drove over about 11:30 p.m., got into the building and the telephone closet and power-cycled the metrix, after which the network returned to normal function. --RussellSenior
January 28, 2010: Plugged in two new indoor access points in the Mississippi Ballroom building, one in the laundry room and the other in the retail space downstairs. Both are Linksys WRT54Gv2's with OpenWrt 8.09.2. I have bridged all the ethernet ports together so that it doesn't matter which one is plugged in. The ethernet for downstairs is run on two unused pairs left in the light-blue cat5 run down there. This was to solve the problem of people inside the Ballroom building having poor connections while a bunch of gear sat on their roof without helping them much. I gave the indoor APs unique SSIDs (www.personaltelco.net/street and www.personaltelco.net/laundry) to help users figure out what they are connected to. Both are on channel 6 for the time being. We should do a more comprehensive site survey to maybe optimize channel utilization. --RussellSenior
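For reference, bridging the LAN and WAN ports on an 8.09-era OpenWrt WRT54G amounts to putting both VLAN interfaces into the lan bridge. This is only a sketch, not a copy of the deployed config; the VLAN names assume the stock eth0.0/eth0.1 layout, and the address is a placeholder:

```text
# /etc/config/network (sketch; VLAN names and address are assumptions)
config interface lan
        option type     bridge
        option ifname   "eth0.0 eth0.1"   # LAN ports plus former WAN port in one bridge
        option proto    static
        option ipaddr   10.11.104.x       # placeholder address
        option netmask  255.255.255.0
```

With both VLANs in the bridge, any physical port behaves the same, which is what makes the "doesn't matter which is plugged in" behavior work.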
August 11, 2009: RussellSenior met Shawn from Stephouse today at the Grand Central Baking building and installed a CPE to connect to Stephouse Wireless for an internet connection, replacing the Covad/Stephouse DSL that we've been using since the Mississippi Network launched way back in 2005. The old gateway nucab in the Ballroom building has been unplugged along with the DSL modem. The primary regression is that we no longer have multiple public IPs and that (temporarily, at least) we don't have port forwards set up to reach the network directly (we have to go in through the VPN). The port forwards should be set up soon by Stephouse. Some other cleanup is needed as well. --RussellSenior
August 7, 2009: SaraHonsberger, RussellSenior, TylerBooth, JorenLove and ChristopherChen worked on improving the link to the bakery roof, so that the rest of the network can be served via a new wireless Internet connection through Stephouse. The old DSL is due to go away in the near future. The installation on the bakery roof was completely revamped. A Ubiquiti PowerStation2 (Stephouse) was installed to link through the trees to the Commons, and a Ubiquiti NanoStation2 (Russell) was installed to provide local coverage. Our power had been cut off when the tenant downstairs whose HVAC unit we were sponging off of moved out and turned off their electricity, so we moved the power injection inside to the warehouse space near the bottom of the upper ladder. In the near future, Tyler will provide a CPE to connect to their network and we will install an AlixCab to act as a gateway. On the Commons roof, we revitalized the second cat5 run to the roof, installed the proper PoE injector, and installed a PowerStation2 (Stephouse) to link to the bakery roof. With rough aiming, we achieved 4 bars and a pretty solid connection of about 10Mbps. Retired to Amnesia to celebrate. --RussellSenior
November 16, 2008: Visited the house of EricMessersmith and retrieved the Netgear WGT634U there that had been serving as the access point for the repeater there. Eric's roof-mounted omni directional antenna connects to the outdoor 802.11g network by way of a PTP-owned Buffalo WHR-G54S in client mode. The Netgear connects to the Buffalo and provides local access. Upon examination back on the bench, it appears the Netgear was physically and electrically sound, but that (guessing) someone had made the mistake of plugging the ethernet cable into the WAN port (we frequently want to use the LAN ports in cases like this so that they bridge rather than route). I took the opportunity to update the version of OpenWrt flashed on the Netgear, bridged eth0.1, eth0.0 and ath0 together, so that it won't matter which port the ethernet is connected to. Also set up dnsmasq on the Buffalo to hand the Netgear a particular address (192.168.3.2 or 192.168.3.3, depending on which mac address the bridge gets) using /etc/ethers. The Netgear is configured to get an address from DHCP. It is buttoned up and ready to plug back in, hopefully in the next day or two. --RussellSenior
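The /etc/ethers mechanism mentioned above works roughly like this; a sketch only, since the MAC addresses below are made up, and note that dnsmasq only consults the file when told to (e.g. via its --read-ethers option):

```text
# /etc/ethers on the Buffalo -- map whichever MAC the Netgear's bridge
# presents to a fixed DHCP lease (MACs below are placeholders)
00:11:22:33:44:55   192.168.3.2
00:11:22:33:44:66   192.168.3.3
```

Listing both candidate MACs covers the bridge picking either interface's address, so the Netgear always lands on a known IP.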
October 22, 2008: Sometime about 2:30 a.m. today the uninterruptible power supply in the laundry room of the Ballroom Building interrupted itself. When I got here, a little after noon, the SmartUPS600 was not responding to applied permutations. As a result, I temporarily bypassed the UPS and went to find another one. A bit after 2pm, I returned from Office Depot with a replacement, an APC Backup 900. It is now plugged in and working, as far as I can tell. Hopefully, this one will give us years of good service as the previous one did. --RussellSenior
October 17, 2008: At about 8pm this evening, someone disconnected several of the ethernet cables from the switch in the laundry room of the Ballroom Building, knocking most of the network off the air. I plugged them back in and left a note asking that people please not disturb the network and to please contact me in the event of trouble. I also noted that bittorrent and other p2p applications are not appropriate on the network, because they interfere with the ability of others to use it. When found to be interfering, the party involved will be blocked until we can discuss the situation. --RussellSenior
October 8, 2008: Visited the roof of the Grand Central Baking building to reset the metrix and also to retrieve the unused WiMax CPE and antenna that belongs to StephouseNetworks. Noticed that there is now an avenue for penetrating the roof. We could, if we wished, remove the enclosure for our PoE switch and move it inside near an AC outlet. This could allow us to provide a better access point for indoor coverage as well. We should perhaps suggest that to the building owners. --RussellSenior
March 13, 2008: Replaced the fourth WgtRepeater with a Ruckus DZ today at Moloko Plus. Previously replaced WgtRepeater devices in the last month or so were at Amnesia, Muddy's and a residential location northeast of the Ballroom building. Should be a substantial improvement at Moloko Plus. --RussellSenior
January 31, 2008: Today, we received and I deposited the restitution funds ordered by the Multnomah County Circuit Court in the case of the MissNet vandalism. It was not particularly expected. I had just been wondering if we would ever see a dime of the money, and found the registered letter 2 minutes later in our post office box. --RussellSenior
January 30, 2008: MichaelWeinberg and RussellSenior visited Mississippi Commons to correct a problem with the ethernet run between the rooftop metrix and the wiring closet. We had power just fine but according to graphs, about mid-December the network connectivity to the wiring closet went away. We found that the Soekris board was fine, but we had an issue with the cable run. We eventually tracked the problem back to a mid-run coupling (keystone to tip connection) near the conduit inside the wiring closet. We fixed that up and everything started working again. In the process, we discovered that the access point in Salty's Dog Shop had disappeared. Michael left a note inquiring about that. Also recently we have upgraded two of the wgt repeaters to Ruckus DZ devices, one northeast of the Ballroom and the other in Muddy's Coffee. It is possible that we now have coverage in Mississippi Pizza, though that is unconfirmed at the moment. --RussellSenior
November 10, 2007: On an unexpectedly nice day, RussellSenior, SaraHonsberger and PaulCLeddy got on the GrandCentralBaking roof and fitted our mast there with some guywires to reduce the swaying and swaying-induced "thunking". Hopefully, that will mitigate the noise the tenants downstairs have been experiencing. Should check in with them in a few weeks to confirm. Paul and Russell also climbed on the Commons roof and Paul got briefed on the gear we have there. We also bumped into someone at MississippiPizza that informed us that the BoiseVoice article had run. We hope to: a) get a copy; and b) begin receiving feedback from residents. --RussellSenior
October 26, 2007: Goodness, this has been neglected for a while. After months of occasional DSL outages, we swapped in a new DSL modem to test whether that is the problem. It is a Netopia 3000 or something, apparently the same device as used in NodeFreshPot where it works fine. We'll see if that helps. --RussellSenior
June 19, 2007: A DSL outage occurred yesterday afternoon at about 2pm. At around 10pm, the outage was noticed and the default route was diverted through NodeFreshPot. Last night I tried power cycling the DSL modem, which did not seem to help significantly. Today, the DSL provider was contacted and the problem corrected. At about 3:15 pm, I power cycled the DSL modem, which has returned the DSL circuit to function. I reverted the default route and things should be back to normal. --RussellSenior
June 6, 2007: RussellSenior, CalebPhillips, StefanMintier, MichaelWeinberg and TroyJaqua converged for a work party this evening. Russell had been working on an updated firmware for the metrixes, based on Metrix Pyramid (1.0b5), with better support for link performance monitoring, and proceeded to reflash the 5 metrixes successfully. Troy fixed some minor config glitches discovered on both of the Ciscos on the ballroom roof, see CiscoConfigNotes. Caleb, Stefan and Michael climbed on the Ballroom roof and repointed the cisco-ballroom-nw antenna a little bit counter-clockwise, so as to take in a bit more along Mississippi (enough to hit one of our WgtRepeaters recently deployed there). Except for Troy, we got on the Commons roof to perform the metrix update and also to check the condition of the gear (looked good). The metrixes are returning helpful information about the state of the backhaul network and soon we hope to have our performance monitoring software fully functional again. --RussellSenior
January 9th, 2007: RussellSenior and I visited the GrandCentralBaking roof to investigate reported noise generated by the sled. We were unable to reproduce the noise, but talked to the building tenants who mostly seemed upset that they were left out of the loop. We agreed to add a couple more cinder blocks to the sled, and we assured them that our gear only consumes 12 watts of power - a negligible amount. We also decided to move the gear 3 feet to the left (facing south) so that it isn't visible from a nearby skylight. We will have to come back later to add the cinderblocks. The movement seems to have somewhat injured the SW link - it is working most of the time. However, the N facing link is unaffected. -- CalebPhillips
December 9th, 2006: RussellSenior returns to the GrandCentralBaking roof, installs the PoE-injecting switch (removing the mid-span PoE injectors), re-establishes contact with the metrix, fixes /etc/network/interfaces ("bridge_ports none" is what I wanted, not commenting out the line), moves the 27 dBi antenna lower on the mast where it can be aimed better (and independently from the CPE), and generally zipties the bejesus out of the loose wires. Got a 40 dB average to Commons from the backhaul-stat.sh script while aiming. Need to figure out how we are going to configure the network to use the wireless link to Pittock.
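For the record, the fix amounts to declaring an explicitly empty bridge rather than deleting the directive. A sketch of the corrected br0 stanza, with the address as a placeholder:

```text
# /etc/network/interfaces -- br0 stanza (sketch; address is a placeholder)
auto br0
iface br0 inet static
        address 10.11.104.x
        netmask 255.255.255.0
        bridge_ports none    # empty bridge; merely commenting the line out
                             # makes ifupdown treat br0 as a plain interface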
December 8th, 2006: TylerBooth, MichaelWeinberg and RussellSenior get on the GrandCentralBakery roof and install the Stephouse CPE device for the link to the Pittock Building, as well as a 5-foot extension to the mast. To connect to metrix-bakery, it is necessary to take eth0 out of the metrix's bridge. Russell guessed at the appropriate change to the /etc/network/interfaces br0 stanza and was wrong. A reboot brought the metrix back up without a configured bridge and therefore contact was lost. Will fix this ASAP.
November 26th, 2006: A power outage from about 6pm until shortly before 9pm knocked our network offline. The outage evidently did not extend as far north as Ed's, nor as far west as Missouri Avenue, as our gear there remained powered. However, the outage, particularly at the Commons building disrupted our backhaul network and therefore users that were able to associate to our gear were not able to get out to the internet. The UPS in the ballroom building did keep our gear there (including the DSL) working for nearly an hour after the outage began.
November 21st, 2006: This afternoon, RussellSenior and I installed gear on GrandCentralBaking on Fremont, expanding the network to its furthest point southward. The metrix we installed connects to the Commons rooftop using 802.11a for backhaul, and it covers the Bakery and surrounding neighborhood with an 802.11b/g Omni-directional antenna. Simple tests show that the network is usable in the GrandCentralBakery Cafe (the Bakery asked for this). The Bakery gives us an excellent vantage from which to make other connections - it is a great addition to the network. -- CalebPhillips
November 16th, 2006: This morning was the sentencing hearing in the case stemming from the vandalism on August 4th. Judge Amiton ordered the defendant, who had pled guilty in late October, to pay restitution in the amount of $3,379.60. The defendant was also sentenced to 24 months probation and 80 hours community service. -- RussellSenior
October 30th, 2006: This evening, the network sustained its second DSL outage in three days. The first occurred Saturday evening, at around 8pm. I noticed it at about 2 in the morning and rerouted traffic out the FreshPot's DSL. Sunday morning at about 11am, I got over and power-cycled the DSL modem, which seemed to fix it. This evening, around 4pm, the DSL link went out again. I arrived around 11pm and fixed it, again with a power-cycle. Maybe the DSL modem is getting tired?
Also took the opportunity to check on the Amnesia repeater, which had disappeared a few weeks ago from the upstream side. It seems that the USB radio had died. I replaced it with one I was carrying, patched up the /etc/dhcp/dhcpd.conf to hand it its repeater IP (10.11.105.10) so I can find it easily, and it worked again. --Russell Senior
October 13th, 2006: RussellSenior, TylerBooth, MichaelWeinberg, myself, and a couple others converged on the roof of GrandCentralBaking on Fremont to decide if (a) we could connect to metrix-commons from there, and (b) we could connect to StepHouseNetworks in the PittockBuilding. Mike and Tyler forgot some component necessary to do the pittock test, so we punted on that one. The commons test was largely successful (despite Russell's skepticism), showing a SNR around 20 and < 1% packet loss in a flood ping. We decided to come back at a later date to erect the node; it will require the purchase of a 27 dBi 802.11a parabolic grid and a 802.11b 120-degree sector. Also, we looked at the view to SelfEnhancementInc - this looks like a good next hop and would allow us to connect back to the mississippi ballroom for link redundancy. -- CalebPhillips
August 8, 2006: RussellSenior and CalebPhillips visited the Ballroom Building in the evening and tried replacing the backfire antenna pointing at Commons and its jumper with the higher-gain 27 dBi dish antenna. Russell even worked up the nerve to climb onto the roof! We moved the sled a few feet to the north to get a better angle past the powerline transformers and clamped on the new antenna. While dismantling the old LMR-195 jumper, I noticed that the N connector wasn't tight on the metrix. Perhaps that explains our lossy antenna feed. Initial results on the new hardware showed link SNR above 40. If this fails to fix it, then it must be something internal to the metrix.
August 7, 2006: RussellSenior and MichaelWeinberg visited the Ballroom Building and checked out the inside of the metrix, reseated the miniPCI radio, and checked the u.fl connector on the radio, which looked secure. Also reset the cisco-ballroom-nw to defaults and then configured it to be 10.11.104.3 (what we had been using for that corner) and to use the traditional SSID. Sadly, later that night, our experiment with the metrix proved unsuccessful as we lost the link again at about 10:30 pm.
August 6, 2006: At about 7:30pm yesterday, a couple hours after we left, we lost our link from metrix-ballroom (previously called metrix-naya-sw) to metrix-commons. This had the effect of knocking everything but the ballroom building offline. Sigh. RussellSenior called CalebPhillips and MichaelWeinberg and arranged another visit for this morning. The most likely suspect is the wobbly tripod. This is one with a small diameter (and short) mast, that would not accommodate the mast-mating hardware we had used elsewhere on the ballroom roof, and so the larger diameter upper mast was simply slipped over the smaller diameter tripod mast stub. This allowed the mast to wobble. The theory is, a gust of wind moved the upper mast and our directional beam stopped hitting the commons antenna.
The link mysteriously came back at about 7am this morning (another gust of wind??), but we wanted to secure the mast to prevent it happening again (assuming that was the problem).
RussellSenior stopped by A-Boy and bought a possible replacement mast and four 3-inch-diameter hose clamps. We arrived at about 10:30 am. Michael and Caleb climbed on the roof and Russell monitored things from the laundry room. Michael and Caleb cleverly tried the simplest solution first, attaching the upper mast to the side of the tripod mast stub with the hose clamps, and it worked. Michael and Caleb adjusted the direction of the backfire antenna until we maximized the rssi from commons (averages about 28.5 dB as reported by the driver, bouncing between about 23 and 31 or so). Hopefully that has fixed the problem for now.
August 5, 2006: Usual suspects (see August 4 entry) show up. First task is to get the ed-to-commons link working. We replaced the chimney brackets, replaced the backhaul antenna, swapped antenna/radio ordering to match all the others on the network, and rewrapped the omni antenna connector. Caleb gave it a rough eye-ball aim, and got 35 dB SNR (as reported by the driver). Good enough. It passes traffic fine.
Then we moved on to the second project: figuring out why no one was associating with the local coverage radio on naya-sw. Turns out, the LMR-195 antenna jumper took the brunt of the fall off the roof and was severed. We swapped the antenna from naya-nw and everything was good. Since we don't need the north pointing backfire antenna, we have decommissioned naya-nw and replaced it with a cisco. We put together the second 27 dBi antenna and tried it on the naya-to-commons link, but possibly due to the wobbly tripod, couldn't aim it in such a way to get any better snr to commons, so partly to take advantage of the larger beam-width, we switched back to the backfire antenna at naya-sw. We had talked about moving the metrix to the nw corner, but Caleb indicates the line-of-sight seems indistinguishable from either corner, and leaving it where it is has the advantage of not requiring the moving of large numbers of cinderblocks.
August 4, 2006: Early this morning, the network was subjected to some vandalism that knocked it offline. RussellSenior, DonPark, MichaelWeinberg and CalebPhillips responded to the outage. Service restored at about 4:15pm. Downtime about 13 hours.
July 23, 2006: CalebPhillips and RussellSenior beat the heat, got on Commons for about an hour, between about 9 and 10 am today. We remounted the omni antennas in a bracket fabricated by Howard Barney and his helper Jake of Bayhouse, Inc (N. Lombard), in which the omni antennas are positioned at equal height, 18 inches apart. This seemed to improve our signal to naya significantly. Discovered why we should be wrapping the tackytape with electrical tape. Some of the tackytape on Commons at the metrix end of the jumpers had drooped (probably from the heat) and had holes showing metal parts of the connector underneath. Wrapped with electrical tape, somewhat badly. In a perfect world, we'd start over from scratch.
Because the antennas were dismounted for a while, the connection to naya for commons and metrix-west was out during this maintenance activity (roughly 8:45-10:00 am).
Next task is installing the higher gain antenna, improved chimney straps, and swap around antenna jumpers on Ed's. Should have gear lined up by the end of the week.
July 15, 2006: Participated in the MississippiStreetFair2006. Made some good contacts with folks.
July 9, 2006: Early this morning (shortly after midnight), RussellSenior loaded the updated drivers on metrix-west, metrix-commons and metrix-naya-sw, completing the transition to the new kernel/drivers. This will provide the capabilities to support the desired reconfiguration of the network planned for later today. Later on, RussellSenior, CalebPhillips, TamarackBirchWheeles, AlexisTurner, DonPark, and MichaelWeinberg tinkered with the network. First we played with metrix-ed to try to get it to connect to commons. That didn't work, so we went and got a 10' antenna mast to make the commons gear higher (needed for altura install anyway). After this was done, we tried repointing metrix-ed again. This time we had some feedback on the link-quality, but couldn't get past a SNR of 19 dB. This wasn't good enough to pass packets with any reliability. Hence, we punted. After looking at the gear on Ed's roof, CalebPhillips resolved that we need to replace the crummy radio-shack mount with a ChannelMaster mount (as used at NodeHollywood). Next work party will aim to install Altura and finalize the logistics for the GrandCentralBaking install. We will put off further work/debugging on metrix-ed until either (a) the gear falls off the chimney or (b) we get the network built out a little more. -- CalebPhillips
July 7, 2006: Tonight, between 10 and 10:30pm, RussellSenior loaded the updated drivers to metrix-ed and metrix-naya-nw. Both metrixes came back up smoothly. We'll watch them for a day or two and see how it goes. If there isn't any significant regression in behavior, we'll update the other three.
July 1, 2006: RussellSenior has finally gotten around to working on an updated madwifi-ng driver for the metrixes. This version has the advantage of providing link quality information on the WDS links (the current one does not), which will let us aim antennas more easily. It also holds the promise of possibly not kernel panic'ing as often. Russell has built a new image and is testing it out (he's actually typing over it right now) on a test network at his house. If it continues to work smoothly, we'll try deploying it in the next couple days. One curiosity is that Russell saw kernel panics bringing up the wds links on a soekris 4526 board (metrix-mark-i), but not on the soekris 4826 boards (metrix-mark-ii). He compiled the kernel for 486, which ought to work on both boards. Dunno.
June 30, 2006: Today RussellSenior and I paid a visit to Mississippi. We set out to re-point metrix-ed so that it could hook into metrix-commons (like metrix-west does). This required scary roof climbing, but was mostly successful. The new orientation allows us to get a reasonably good signal from both metrix-commons and metrix-naya-nw. However, we were unable to get WDS working through commons. We resolved that the firmware upgrade may fix this, or at least make it easier to debug, so we left metrix-ed connected to metrix-naya-nw (as it was before) and punted. We will try again after the metrix firmware upgrades, but hopefully it won't require any more climbing on that god-forsaken roof. --CalebPhillips
June 22, 2006: A work party consisting of CalebPhillips, MichaelWeinberg and RussellSenior visited Mississippi today. The Cisco in the SE corner of the Mississippi Ballroom building was re-energized for the first time in ages. Since last summer-ish. In order to expedite, a section of cat5 was attached with a rj45 coupler, and the connection was then wrapped with tacky-tape and then electrical tape. The connection was slightly flaky, but was on and stable when we left. Next, we visited Quirks and Quandries and installed another repeater (third deployed so far on the network). Funds generated by this purchase can go towards several new netgear wgt634u's. Current repeaters are: 10.11.105.7 on Michigan behind Commons; 10.11.105.10 in Amnesia; and 10.11.105.14 in Quirks and Quandries.
May 28, 2006: I modified the DHCP configuration to allocate 15 IPs from 10.11.105.1 to 10.11.105.15 to the MississippiNetworkRepeater devices based on their USB radio MAC address. Only a couple of them are allocated at the moment, for those deployed or about to be deployed.
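With ISC dhcpd (the /etc/dhcp/dhcpd.conf mentioned in other entries here), that kind of MAC-keyed allocation is just a host block per repeater. A sketch, with the MAC address and host name made up for illustration:

```text
# dhcpd.conf (sketch; MAC address and host name are placeholders)
host repeater-amnesia {
    hardware ethernet 00:0e:3b:aa:bb:cc;   # the repeater's USB radio MAC
    fixed-address 10.11.105.10;
}
```

One such block per repeater keeps each device at a predictable address in the 10.11.105.1-15 range regardless of lease churn.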
May 26, 2006: Metrix-commons had an outage from about 5pm until 7pm, where it stopped passing traffic on its backhaul network, cutting it and metrix-west off from the rest of the network and the internet. I stopped by Commons and power cycled the metrix. Traffic returned to normal thereafter. --RussellSenior
May 10, 2006: Metrix-west was offline for about 24 hours (starting Tuesday, a little before noon). Apparently due to a displaced power cord. Power reconnected. Tested, back online. --RussellSenior
April 9, 2006: The whole network was knocked offline this afternoon for about 2 hours (from a little after 2pm until about 4) due to a flaky ethernet connection in the NAYA wiring closet to "naya", our primary router. We could connect to it from the outside, but its connection to the entire local network was down until I could arrive on-site and jiggle it. This is a problem needing fixing. Flaky ethernet connections suck. --RussellSenior
April 2, 2006: The northern leg of the network was knocked offline this afternoon (from around 1pm until 9pm) due to a flakey ethernet connection in the NAYA wiring closet to metrix-naya-nw. Someone had plugged an access point into the 8-port switch; perhaps they bumped our crappy crimp job. --RussellSenior
We are currently experimenting with NetgearWgt634u repeaters, using OpenWgt and a Hawking USB radio. They seem to have an annoying and thus far unexplained tendency to lose their upstream association after a while. I stopped by Amnesia to reboot its repeater this evening as well. --RussellSenior
March 28, 2006: The core of the network was down for a few hours in the afternoon, apparently due to a power outage in the area. The FreshPot network was also offline during this period.
Interestingly, I can't ping the edimax from either the FreshPot or Missnet side. I think (unconfirmed) this has to do with MAC cloning at the bridge. It is confusing, because I thought I remembered pinging it successfully before.
February 4, 2006: RussellSenior replaced the 5-port netgear switch he had loaned with an 8-port netgear switch that the project had purchased. This will allow us a place to jack in for testing after we reconnect the third AP on the roof. Reused the existing wall-wart transformer after confirming it was also a 7.5V 1A device.
January 24, 2006: RussellSenior visited the Center for Self Enhancement at N. Kerby (three blocks east of Mississippi) and Failing today, and was given a tour of the roof by Facilities Manager David Proby. See photos. There are some tree issues. We apparently can't get onto the white part of the roof, which slopes down to the east. A taller mast might compensate.
January 14, 2006: RussellSenior and I traveled to the Naya Building today to install a second server: "chevy". The addition of chevy will take some stress off naya, which was doing everything until now. We also took some time to organize and label the cables around the server "shelf". With the addition of zipties, staples, and a few custom-cut cat-5e cables, we conquered the madness. Finally, we took a trip over to metrix-west to test download speed at the far end of the network with the new load-balancing installed. The result: load balancing is awesome.--CalebPhillips
January 11, 2006: RussellSenior, CalebPhillips, TroyJaqua, BenjaminJencks and DavidJencks convened a WeeklyMeeting at NodeFreshPot at approximately 6:45 pm. At approximately closing time, we configured a serial console on the FreshPot nucab and shut down. We opened the box, installed the ISA NIC, a 3Com 3c509B, and rebooted. The nucab came back up without the new interface. Modprobing 3c509 installed the kernel module driver and we had an eth2 interface. However, we wanted to use the ISA card to connect to the DSL circuit (it is a slower card), so we inserted 3c509 into /etc/modules and rebooted. From the serial console (ttyS0,19200,N81) we could see that the ISA card came up as eth0, so we rearranged the cat5 so that eth0 remained the DSL; eth1 remained the local FreshPot AP; and eth2 attached to the edimax. We added a corresponding stanza to /etc/network/interfaces for eth2 and gave it an IP of 10.11.104.20 (when we decommission metrix-naya-sw, we'll give the FreshPot eth2 10.11.104.2 instead, in order to keep the gateways together in the IP space).
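The eth2 stanza added to /etc/network/interfaces would have looked roughly like the following sketch (only the address is from the entry above; the netmask is an assumption):

```
# Hypothetical shape of the FreshPot eth2 stanza.
# Netmask is assumed, not recorded in the log.
auto eth2
iface eth2 inet static
    address 10.11.104.20
    netmask 255.255.255.0
```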
After packing up at FreshPot, we headed over to NAYA to see if we could get load balancing working.
TroyJaqua worked on configuring a Linksys WRT54GS and an edimax in WDS repeater mode, and did so successfully. This may be a solution for our businesses looking for a booster at their locations. The downside to this is it requires a static wds link to be configured on the upstream radios. The upside is that it works. Troy added a WDS link to metrix-naya-nw's b/g radio to connect to his Linksys, then connected an edimax to the Linksys and then several people (Troy, Ben and David) associated their laptops to the edimax and got dhcp resolution and connected to the rest of the world.
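On the metrix side, a static WDS peer of this sort is added per-radio through the madwifi private ioctls; a sketch, assuming the era's iwpriv interface and using a placeholder MAC for the Linksys:

```shell
# Hypothetical commands on metrix-naya-nw's b/g radio (assumed ath1):
# enable WDS and add the Linksys as a static peer.
# The MAC address is a placeholder.
iwpriv ath1 wds 1
iwpriv ath1 wds_add 00:00:00:00:00:02
```

This matches the downside noted above: each upstream radio has to be told about each WDS peer by hand.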
Made some progress on load-balancing, and adjourned around 9pm. Ben is continuing to work on the load-balancing remotely.
January 10, 2006: TylerBooth, MichaelWeinberg, RussellSenior, BenjaminJencks and CalebPhillips worked at NodeFreshPot to install a wireless link to the Mississippi Network in order to facilitate load balancing. We found a suitable location for the edimax device, on top of the shelves behind the counter, and found a way to run ethernet from there to the nucab in the backroom, and power to the nearest practical outlet. We had a little difficulty getting a high quality crimp on the ethernet cable. This was attributed to the outdoor-rated cat5 being perhaps a bit thicker than typical, and perhaps not completely compatible with the tips available. We were making good progress when we discovered that the FreshPot nucab had only two PCI slots and thus would not accommodate another PCI NIC. Russell went home to find one, but failed. Packed up and went home.
January 7, 2006: RussellSenior, BenjaminJencks and DavidJencks spent some time today on Mississippi. First, NoCat was enabled on web ports (80 and 443). We tested from NodeFreshPot and it seems to function properly. It took a little hacking because of current DNS inadequacies (e.g. nodemississippi doesn't resolve). We spoke briefly with the NodeFreshPot counter people about the wireless there, and they referred us to the manager. Need to pursue that through Tyler, most likely.
We visited the BlackRoseCollective, just north of NAYA across the small community park and spoke to BHT. We told him what we had in mind, and he was okay with it. We plugged in an edimax to test the signal to naya-nw. They have no nucab there, so using the edimax there will be more challenging. His only expressed concern is that they have a house full of people and didn't want missnet traffic to impinge too much on their bandwidth.
We visited Commons and tried to connect to the cisco-commons from inside (it was pouring rain outside), but were unsuccessful. We could connect to metrix-commons. Given that no one appeared to be connecting, we decided to decommission it for the time being, possibly to live again on NAYA. Russell and Ben climbed up on the roof and removed the cisco and its sector antenna. We clipped off the ethernet and wrapped the end in tacky tape. Check with Russell for the old tip so we can be sure to crimp the new tip appropriately.
Ben investigated the edimax and determined that it won't do routing in its normal client mode. We figured a way to use it in bridging at NodeFreshPot, where we can install another NIC in the nucab there, run ethernet to the edimax in the front of the store, and do the appropriate routing on the nucabs. It appeared that we couldn't get DHCP resolution through the edimax client-bridge, but that won't be an issue with FreshPot, as we can assign a static IP. Need to coordinate with StephouseNetworks on NodeFreshPot wiring.
We stopped in to check with the Dog Shop, and the woman at the counter reported they'd seen the splash screen. I told her we'd just enabled it and that they'd only see it once a day (if they stay connected). We don't have any cacti data on that edimax, so we don't know how heavily it is being used.
Earlier in the day, I'd worked with CalebPhillips on checking out whether the edimax repeater mode will work on the Mississippi Network. We had some partial success, but had trouble with DHCP. We have not thus far succeeded in getting DHCP resolution through the repeater. --RussellSenior
December 16, 2005: As of about 3:30pm, the southern branch of the Mississippi Network was converted to a WDS configuration, and simultaneously, the problematic metrix-west (N Missouri and Failing) began to function properly. RussellSenior visited the neighborhood and confirmed the ability to get DHCP resolution from metrix-west and was able to roam seamlessly to the nodes at Mississippi Commons and the NAYA building (Mississippi and Shaver). Will need to convert the northern branch as well now. Thanks for everyone's patience as we sorted through the problem. We are now poised to further grow the network with much less turmoil and delay.
At about 7:00pm, the northern branch of the Mississippi Network was also converted to the WDS configuration. This will allow better monitoring of performance (particularly seeing if people are connecting), and will allow us to transition Ed's roof from the metrix-naya-nw connection to the metrix-commons, assuming that is ultimately considered desirable.
December 12, 2005: RussellSenior has autogenerated /etc/network/interfaces for each of the metrixes for using a WDS configuration. On the test rig, he has been running a ping for the last 20 hours from one client to another (as described in the December 4 entry) and seems to see a consistent 3.4% ping loss rate. We are still seeing a kernel panic after ifdown/ifup'ing the interface (as described here), but believe the problem is tolerable since the metrixes are rebooting themselves on panic. The goal is to get the WDS configurations installed this week, possibly on Thursday.
December 9, 2005: Last night, MichaelWeinberg, JenSedell, and RussellSenior distributed flyers at the Mississippi Art Walk. We may have located another willing roof host at the furniture shop on Mississippi down near Fremont. Russell continues to work on a metrix configuration that will work reliably. Current status is that WDS is working, bridging works, slightly lossy, panics on ifdown/ifup, but at least it is rebooting itself and coming back up in good shape. Maybe an interim solution is just always rebooting to ifup interfaces.
December 4, 2005: I have had partial success using WDS bridging on a test bed consisting of two metrixes and a router/AP using the madwifi-ng drivers and a multiple VAP configuration. I am able to ping from a client-11g -> WDS-11a -> WDS-11a -> WDS-11a -> client-11b, which is essentially what wasn't working before. Pings aren't without a few dropped packets, but relatively few (~3%). The most significant problem now is that I am having trouble getting the backhaul radios to consistently come up in 11a mode. Perhaps some timing issue. Also, I've seen some oopses, not always fatal. I should probably sync everyone up to the latest rev of madwifi-ng. Anyway, hopeful news! With luck, this will get ironed out in the next few days and we'll be able to deploy it.
December 2, 2005: Became aware in the late afternoon that the nucab's DHCP server was not running. The connection was fine, but clients weren't getting configured, which, uh, reduced utility. AaronBaer patched up the deficiencies and as of about 3:40pm the DHCP server appears to be running again. We are talking about ways to facilitate more expeditious outage reports. --RussellSenior
December 1, 2005: Buick replaced with a nucab box. TroyJaqua and RussellSenior fixed a small bug consisting of a missing /etc/network/nat.sh script and it started working. Network functioning again. Modified ebtables on metrix-naya-sw to reflect the new gateway (substituting its mac address for buick's eth1).
November 30, 2005: Metrix-naya-sw hung at about 3:00am when I ran "athdebug +recv" in order to collect information on MAC addresses going through each node. It is not passing traffic, so metrix-commons and metrix-west are currently unreachable. So, metrix-naya-sw needs a power cycle as soon as possible. --RussellSenior
November 29, 2005: Buick got sick and was rebooted. In fact, it is still sick and will be replaced, hopefully on Wednesday evening, with a nucab, at least temporarily. We also power cycled the Edimax AP in the dog shop in Mississippi Commons. It appears to be functioning now.
November 21, 2005: RussellSenior built a freshened kernel (22.214.171.124) and madwifi-ng (rev 1329), installed them on metrix-naya-sw, metrix-commons, and metrix-west, and rebooted. The new madwifi-ng rev was built in a metrix-compatible chroot environment and so the madwifi-utils in /usr/local/bin are now linked properly. The other two metrixes, metrix-naya-nw and metrix-ed are still running the original 126.96.36.199-metrix kernel and the WDS-branch madwifi drivers from late July. It is possible to connect with essentially zero packet loss from buick to metrix-west if you simultaneously "ping -f 10.11.104.2" from buick. Metrix-west was apt-get upgraded that way.
November 20, 2005: RussellSenior thinks he's figured out what is going wrong. It is an effect of client-node to client-node traffic when it needs to pass through one of the client bridges. As mentioned earlier, when a client-node sends to a client-node, it sees the traffic twice, once when it sends it and once (in promiscuous mode) when the master rebroadcasts it. When the traffic originates from the other side of the bridge (say, from buick via eth0) and the bridge sees the rebroadcast packet it just sent, with that SRC MAC now appearing on ath0, the bridge reassigns that MAC to the bridge port associated with ath0, not eth0. When packets return headed for that MAC, they get to the bridge and the bridge fails to deliver to the port where that MAC actually lives. Boom. This problem does not occur when communicating client-to-master (or master-to-client), because these packets are not rebroadcast. The problem doesn't occur when the communication is strictly client-to-client, because even though the client still sees the rebroadcast packet, the bridge is smart enough to know not to reassign local MAC addresses to a different port.
RussellSenior tested this model this morning by ping flooding from buick to metrix-commons (thus keeping metrix-naya-sw's bridge refreshed with where buick's MAC should properly live) while pinging the problematic metrix-west. Still some lossage, but far less than the usual 98%, only about 17%.
Now the question is, what is the solution? One temporary solution might be to use ebtables filtering to drop packets at metrix-naya-sw where buick's MAC shows up on ath0 as a SRC MAC. But there are other situations where we'll see the same phenomenon, e.g. 11b/g clients of the 11a client nodes. The real solution is to get the sending bridges to ignore the rebroadcasts altogether.
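The interim ebtables idea described above would look something like this on metrix-naya-sw, with a placeholder standing in for buick's eth1 MAC (whether filtering alone is enough to keep the bridge's learning table correct is exactly the open question):

```shell
# Sketch of the proposed interim rule: drop forwarded frames arriving
# on the wireless port (ath0) whose source MAC claims to be buick's.
# 00:00:00:00:00:0a is a placeholder for buick's real eth1 MAC.
ebtables -A FORWARD -i ath0 -s 00:00:00:00:00:0a -j DROP
```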
November 19, 2005: DonPark and RussellSenior climb up on Mississippi Commons and collect some more data. Russell collects some kismet data during some ping tests, but misconfiguration of his kismet rig reduces its utility to near zero; however, a careful examination of the tcpdumps from the metrixes yields the insight needed to figure out what is going wrong. The failure in the ping request/reply loop between metrix-west and buick always occurs in the delivery of packets from metrix-naya-sw to buick. ARP traffic gets delivered fine in both directions, but ICMP and other IP traffic disappears after it arrives at naya-sw on its way to buick... buick never sees it. Why?
November 17, 2005: TroyJaqua and RussellSenior visited Cecily's to try to recover metrix-west from misconfigured network. Unfortunately, we were unable to connect via ethernet either, so at about 3pm we came back, equipped with a ladder generously loaned by a neighbor, and Troy climbed up and swapped out the metrix motherboard with one configured with Ben's firmware. This was deemed to be the most practical solution, given the difficulties of getting the serial cable onto the DB9 pins and logging in while balancing on the crest of the roof. The radios on metrix-west remain the same as before, just the motherboard and its flash and ethernet are changed. See updated MAC address in the table below. Also did some testing from metrix-west. One interesting result was that pinging from metrix-west to buick disrupted a ping from metrix-commons to buick. Still need to get onto the commons roof to collect some over-the-air packets. Hopefully tomorrow. Ebtables may be our salvation. We are getting closer, but still haven't cracked it yet. The 11b/g radio was put on essid notyet.personaltelco.net to indicate it isn't actually working yet. I said we'd switch back to www when it was active and working. Talked to a few residents that were enthusiastic to dump their $60/month broadband.
November 11, 2005: I think I've figured out the "received packet with own address as source address" messages. They are only appearing on metrix-west and metrix-naya-sw. I think they are a consequence of having bridges on nodes in managed-mode. The master-mode node rebroadcasts frames sent via it, and bridging puts the interfaces in promiscuous mode, so the sender is hearing the rebroadcast. The messages are therefore, presumably, innocuous. --RussellSenior
RussellSenior, MichaelWeinberg, and I got together today and did some more discussion and testing regarding the problems at hand. This discussion continued into the evening on IRC. As of now, we have some unanswered questions, the most pertinent being: what is naya-sw doing with packets coming from metrix-west headed to buick, and why isn't it doing the obvious thing...send them to buick? --CalebPhillips
November 10, 2005: RussellSenior rebooted metrix-commons and metrix-naya-sw to the new kernel, and magically, traffic started to flow between metrix-west and metrix-naya-sw for the first time. However, oddly, connectivity from buick (10.11.104.1) to metrix-west was still severely lossy. Log messages "received packet with own address as source address" are appearing on metrix-naya-sw's /var/log/messages. Some progress, but some bugs still need straightening out.
November 9, 2005: RussellSenior got into the basement and was able to recover metrix-west via ethernet. The problem had to do with modules not loading. Patched that problem in a somewhat kludgy way by adding "pre-up modprobe ath-pci" to the athN stanzas in /etc/network/interfaces. The ath-pci module should have loaded from /etc/modules, but wasn't for some reason. Applied the same fix to metrix-commons and metrix-naya-sw, but haven't rebooted them. Can ping metrix-commons from metrix-west, but not all the way to metrix-naya-sw. Hoping a reboot to the new kernel will correct that.
November 8, 2005: RussellSenior has copied a new kernel, modules and utilities for use with the madwifi-ng drivers over to metrix-west, metrix-commons, and metrix-naya-sw. The /boot/grub/menu.lst file is modified but still pointing at the 188.8.131.52-metrix kernel, except in the case of metrix-west: because it was already not connected, we decided to use it as a test case. It rebooted to 2.6.14-metrix (with the madwifi-ng drivers) and is associated with metrix-commons with a nice strong signal on 802.11a, but for some reason its network is not functioning. Same thing with the 802.11b/g radio, association and a nice strong signal from the street, but no network. It isn't pingable from either radio. Going to try to get inside to test from the ethernet tomorrow.
November 7, 2005: RussellSenior is hacking on a metrix image with a new kernel and madwifi-ng drivers, using the metrix we pulled off of Cecily's as a testbed.
- I built a serial-console cable for the metrixes. It consists of a standard serial cable with one end cut off and spliced with three small wires with female connectors on the tips to slip over the male DB9 pins. It is slightly tricky to install, requiring tweezers, some light and a little persistence, but it beats the hell out of disassembling the thing to get at the serial port. The communications parameters for talking to the metrix console are 19200 baud, N81, no flow control. Using this console cable, the three wires are placed as follows (taking care that bending loads on the wires don't cause the conductive parts to touch... probably should insulate the connectors better with some heat-shrink tubing):
- pin 2 - blue
- pin 3 - white
- pin 5 - black
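- With the cable in place, any terminal program set to those parameters will do; for example (a sketch, assuming the cable lands on /dev/ttyS0):

```shell
# Attach to the metrix serial console at 19200 baud.
# The metrix side is N81 with no flow control, per the note above.
screen /dev/ttyS0 19200
```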
- Observed that wds clients can't associate with non-wds masters, but any client can associate with wds masters.
- I apt-get update ; apt-get upgrade'd the metrix, which installed about half a dozen new versions of things, but not too overwhelming.
- I also apt-get install'd tcpdump and ntpdate.
- I have compiled a 2.6.14 kernel and current svn madwifi-ng drivers and loaded them onto the metrix with rsync. It still can't see my 802.11a AP, presumably because I don't have the madwifi-ng drivers on the AP yet. Grub is not configured to boot the new kernel, so the metrix falls back to the old kernel without manual intervention on the serial console.
- The eth0 interface isn't coming up on boot. But /etc/network/interfaces is a mess right now, so maybe no surprise! An ifup eth0 cures it, but you obviously need a console to do that.
- The madwifi-ng drivers employ a new method of defining interfaces. Physical devices are named wifi0, wifi1, ... wifiN. Interfaces are created with the wlanconfig command, e.g.: "wlanconfig ath0 create wlandev wifi0 wlanmode ap". The associated utilities are installed in /usr/local/bin. Some /etc/network/interfaces changes need thinking through.
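- One way that thinking-through could land is to create and destroy the VAP from the stanza itself via pre-up/post-down hooks; a hypothetical sketch (the address and mode are illustrative, not from the log):

```
# Hypothetical /etc/network/interfaces stanza for a madwifi-ng VAP.
# Address/netmask are illustrative placeholders.
auto ath0
iface ath0 inet static
    pre-up /usr/local/bin/wlanconfig ath0 create wlandev wifi0 wlanmode ap
    address 10.11.104.3
    netmask 255.255.255.0
    post-down /usr/local/bin/wlanconfig ath0 destroy
```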
October 28, 2005: CalebPhillips, RussellSenior, and MichaelWeinberg replaced the metrix on Cecily's rooftop and re-attached the equipment with real chimney mounting hardware. This node seems to be working just fine now, with good connectivity to Commons. However, currently there seems to be a problem with the switch at Naya that is preventing the network from handing out DHCP or access to the intarweb. The todo list below has the current todo.
Russell and I came back to fix the switch issue and found Buick DOA. We also replaced the switch with one Russell had on hand. Everything seems to work now...except, Cecily's cannot connect to anything past commons in the direction of naya. Specifically, it seems like clients of the commons 802.11a radio (with the omni) cannot see each other (naya-sw and metrix-west are both clients to commons). We are working on an explanation. At this point the network is entirely functional everywhere except metrix-west. - CalebPhillips
October 27, 2005: CalebPhillips and RussellSenior managed to get Ed's roof online. Without access to a ladder we had only one option to make progress, and that was to see if the apparently non-functioning metrix on Ed's roof was actually powered on and accessible from the ethernet. Ed graciously let us in to check. This possibility was suggested by Russell's experience with metrix-naya-sw, where the radios did not initially come up after a reboot. Turns out, the metrix was on. Russell was able to connect via the ethernet, and got a weak radio signal on 11g, roughly 7 dB SNR. So the problem wasn't a bad POE connection and it was not a failure to load ath_pci either. Caleb suggested that we might have the antennas backwards. Twice, trying to "ifdown ath0" and then "ifdown ath1" froze the metrix. Russell tried swapping ath0 and ath1 in /etc/network/interfaces, rebooted and bingo, 11b/g started working! SNR in the attic jumped to about 30. However, the 11a backhaul was weak. Pinging to Commons worked, but with about a 40% packet loss. Maybe we need to repoint the antenna (isn't there a distance tweak for 11a having to do with an ACK timeout or something, but I thought it was for further than we're talking about here). Caleb and Russell retreated to FreshPot to report success and think. Russell, looking out the window at the backfire already pointing at Ed's from NAYA NW, realized there was a chance that Ed's radio might be able to hit NAYA NW, even though it wasn't pointed directly at it, because it was only 600 or so feet away instead of 1500 feet to Commons. Russell changed the metrix-naya-nw ath0 radio to ESSID backhaul-nw and master mode on channel 161 and turned it on, then walked up the block near Ed's, logged in via 802.11g and reconfigured its ath1 (connected to the 11a antenna) to backhaul-nw and rebooted. Bingo! SNR of about 30 dB. Kind of a chewing gum and baling wire solution, but it is up and passing traffic. - RussellSenior
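The two reconfigurations described, expressed in the wireless-tools syntax of the era (a sketch only; in practice the metrixes were changed via /etc/network/interfaces and rebooted, and the VAP names are as described above):

```shell
# On metrix-naya-nw: turn its 11a radio into a master on the new ESSID
iwconfig ath0 essid backhaul-nw mode master channel 161

# On Ed's metrix: point the 11a client radio at the new master
iwconfig ath1 essid backhaul-nw mode managed
```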