Transmitter-Receiver Arduino Connection

Continuing my plan to use the Arbotix as the base of the aerial platform, I was ready to test the translation of pulse signals from the transmitter, through the Arduino, to the servo, and ultimately to the ESCs for the motors.  The transmitter I am using is the FlySky CT6B, which comes with a 6-channel receiver ($35 US).  I programmed the Arbotix (Arduino) with a sketch that uses the Servo library.  Basically, it reads the pulse signals from the receiver and, using the library, outputs the intended angle to the servo (the signal gets called PPM or PWM depending on which blog you follow; I just like to say ‘pulses’).

#include <Servo.h>

int ch3in;      // throttle input pulse width (microseconds)
int ch4in;      // rudder/yaw input pulse width (microseconds)
int ch3out;
float ch4out;
int ch3min;
int ch3max;
int ch4min;     // lowest ch4 pulse seen so far
int ch4max;     // highest ch4 pulse seen so far
float ch4smth;  // this is the smoothed variable
int ch4hold;    // in case it is needed to refer to later
Servo outS4;    // variable for ch4 out to servo

void setup() {
  pinMode(3, INPUT);    // digital input, throttle
  pinMode(4, INPUT);    // digital input, rudder
  pinMode(17, OUTPUT);  // not sure if this works as analog 7
  pinMode(7, OUTPUT);   // for sure works with the Servo library
  outS4.attach(7);
  ch4min = 1500;        // start at mid-stick; min/max adapt as the stick moves
  ch4max = 1500;
  ch3out = 0;
  ch4out = 90;
  ch4smth = ch4out;
  Serial.begin(19200);  // a little faster than 9600
}

void loop() {
  ch3in = pulseIn(3, HIGH, 25000); // read pulse, 25 ms timeout
  ch4in = pulseIn(4, HIGH, 25000); // ditto
  // calcs for out: track the observed min/max, ignoring dropouts below 900 us
  if (ch4in < ch4min && ch4in > 900) {
    ch4min = ch4in;
  }
  if (ch4in > ch4max) {
    ch4max = ch4in;
  }
  // only map once min and max have separated, so map() never divides by zero
  if ((ch4max / 10) > (ch4min / 10)) {
    ch4out = map((ch4in / 10), (ch4min / 10), (ch4max / 10), 83, 109);
    ch4out = constrain(ch4out, 83, 109);
  }
  // smooth here: average the previous smoothed value with the new one
  ch4smth = (ch4smth + ch4out) / 2;
  ch4hold = ch4smth;
  outS4.write(ch4smth);
  delay(6);

  Serial.print("  Channel 4 out:");
  Serial.println(ch4smth);
}

That is the test sketch.  There is a little bit of smoothing in there, and it does seem less jerky than the basic Servo library example.  The response is pretty good; I just have to calibrate the angles and the physical position of the servo horn, and that should take care of the yaw/rudder.  Much more complicated will be the integration of the IMU(s) and control of the ESCs and motors.

Posted under Remote Aerial

Remote Aerial Platform

I got a 100-sized electric helicopter recently and was going to use it to try aerial imaging and/or FPV after I got this awesome little dice cam that records HD movies onto a microSD card.  The thing also has 4 infrared lights for videos in the dark, and it is essentially a 1-inch cube.  I got mine at HobbyPartz.  I then decided I wanted to try another platform, and instead of going for a quadcopter, I thought I would try a tri-copter.  Just for goofs, I wanted to challenge myself and go with ducted fans for thrust.  I know they are not that efficient, but I wanted a bladeless testing vehicle for close quarters at first, and I wanted to experiment with the mixing and gyro integration before deciding on the final platform.  The key for this project is keeping it light and under a foot in overall diameter.  To start, I am testing the platform with 30mm ducted fans (AEO fans from China), which are ridiculously light, and on paper the thrust is not bad for their size.

Crazy light balsa wood


For yaw control, I looked high and low for a good rotating connector and came up with a floppy-drive stepper, which has smooth rotation and a long spiral shaft that was perfect for jamming and gluing into the center of the balsa support.  The control drive would be a small 9g servo.

Now comes the really crucial part of this vehicle – control and balance.  I don’t think I will be content with a 6-channel receiver hooked straight up to the electronic speed controllers, let alone the balance issue, because it is not a quadcopter.  I actually got an Ardupilot Legacy to incorporate into the design.  I probably should have gone with the newer Mega version, but there was a 2-3 week wait unless I wanted a completely built one with an IMU attached for considerably more money.  I got it, soldered all the headers in place, and wired the jumpers on the reverse side for the extra channels.  So I end up having to integrate an IMU into it somehow.  My current choices: an Arduino Nano 3.0, the Ardupilot Legacy, and an Arbotix board.  The Arbotix has an ATmega644p – not quite ‘mega’, but more than the 328p in the other two boards on hand.  I decided to challenge myself and go with the Arbotix to start on this prototype.

As you can see above, this board has plenty of I/O and the bonus of an XBee port (essentially a shield), which opens up telemetry options and possibly replacing the 2.4GHz receiver altogether.

So the challenge will be incorporating either one IMU or multiple IMU devices.  With the Ardupilot I was planning on getting a GPS unit, but now I may just get a compass module, which will work in concert with the gyros and accelerometers.  I’m not looking for waypoint capability or autopilot at this time, so stabilizing pitch, roll, and yaw will be sufficient.  My general plan for the configuration goes something like this: use the Arbotix as the base for the components, with the ESC cables plugged into 3 analog headers, and do the final mixing and distribution in the sketch after the signals come in from the 2.4GHz receiver channels.  A mini breadboard will hold the IMUs and feed their output into the Arbotix.  Gyro and accelerometer feedback will be processed by the ATmega and fed back out to ‘correct’ the ESCs and motors.

Posted under Remote Aerial

Vue Rendercow persistence

After getting Vue 10 up and running, I wanted to employ some of my included 5-cow licenses.  On renders that take a while, using multiple workstations is the way to go.  In fact, I installed Rendercow on my main workstation because I have 6 cores and at least one of them could be taken up by rendering in the background.  The metrics on Rendercow renders are interesting.  It is not just raw processor speed; there are other factors involved.  My guess is it has something to do with the front-side bus and the cache on the processor itself.  Granted, my workstation has the newest hardware, but my wife’s PC has a faster processor in raw GHz.  Hers is a 2-core with less on-die cache; mine is slower in clock speed but has more cores, a bigger cache, and a slightly faster front-side bus.  Using HyperVue for an external render from the start, my PC was able to accept and render a third more frames.  The third PC I was using was a laptop on wireless with a slower processor; it rendered half as much as my wife’s PC.  I might add that both my wife’s PC and mine have gigabit LAN ports, so that part of the equation was the same.

Anyway, the point of this entry is the annoying ‘disconnected…’ status of one or more of the render nodes (cow clients) when Rendercow is clearly running.  In the case of my PC, where HyperVue resides, Rendercow would disable itself and disappear.  This would happen, at least, after a rendering job while it was assembling all the frames, and it seemed random – it didn’t happen every time.  The laptop on wireless would be running and its Rendercow status window would say ‘rendering…’, while HyperVue would say ‘disconnected…’.  Obviously something was off.  What I have come to find is that turning off the scanning for Cows (the Auto Rendercow search) is the first thing to do.  Then each PC should have a different port selected in the Rendercow setup window; I used 5005, 5006, and 5007 on the three PCs.  Then you add the nodes manually in HyperVue using the ports you assigned on the Cow clients, and everything works smoothly.  In cases where the render had stopped while assembling frames, switching to manual mode let it pick up where it had left off.

I don’t know how I managed rendering without Cow nodes before.  What used to take 5 hours to render now takes 3.  I am going to resurrect an old box, put a new motherboard and processor in it, and keep it headless in the garage (where I have the network switch and router), just waiting for renders.  I’ll have to turn on the ‘wake-on-LAN’ feature so an upcoming render can wake it up to start the job.  I read a blog somewhere that talked about running a Rendercow instance per processor on one PC.  If the processor I get is fast enough and has enough cores (at least 4), then I am going to try that.  Now what we need is distributed processing for GIS for some of those statewide geodatabases I am currently dealing with!
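On the wake-on-LAN piece, the idea is just to fire a magic packet at the garage box before submitting a render.  A minimal sketch, assuming the little ‘wakeonlan’ utility from the Ubuntu repositories and a placeholder MAC address for the render box:

sudo apt-get install wakeonlan
# send the magic packet to the render box's network card (MAC address is a placeholder)
wakeonlan 00:11:22:33:44:55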

Posted under 3D Modeling

WD Passport Concerns

What good is a backup drive if you can’t extract the data back off it?  There are many people feeling that pain, I am finding out.  I actually haven’t lost anything to a failed backup drive yet (internal drives have failed, of course), but I am now wondering about backups of backups!

This came to light because I have a WD Passport drive that started to behave strangely.  Not an outright failure (at least not yet), but out-and-out super-slow transfers.  This happened suddenly, too – no hiccups or warnings beforehand.  This is my third WD portable drive (not replacements – three at the same time for various purposes); my others are ‘My Book’ models.  This is not an isolated incident, either: Google “WD Passport slow” and you will see it is widespread.  Apparently the 320GB and 500GB models are the most commonly reported (mine is 500GB).

At first I thought it was an issue with Windows 7, as I have Vista Business at work.  One of the ‘My Book’ drives works fine, though, so that is not the case.  Then I thought it was the front USB port, so I tried another cable connected to a rear USB port.  No change.  I ran chkdsk last night and it never completed – it was still running at about 27%.  It sure sounds like the drive has a bad spot on it, but there are no repeated popups like you get with a failing internal drive; it just looks like it will take a long time.  I am now trying to transfer 16GB using my wife’s Vista PC, and it says it is transferring at 124KB/sec, but the ‘remaining’ calculation is blank.  So who knows how long.  The drive has never been out of my possession, either, and it has never been dropped.

I have listened with my ear on the drive and did not hear the telltale clicking you sometimes get when a drive goes bad.  None of the data is irreplaceable, as I have it on other drives.  It does open my eyes to the possibility of precious photos and image archives being unretrievable if a portable drive fails.  That has always been possible – the same held true in the days of backup tapes.  I am only pointing it out because of what seems to be a high incidence rate with these Passport drives.

I am going to try some disk recovery utilities, and I may even break open the seal (these Passports look to be sealed up pretty well!) and try the drive on another controller.
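One candidate on the Linux side is GNU ddrescue, which images a flaky drive while skipping and retrying bad areas.  A sketch of the kind of run I have in mind (the device name and output paths are placeholders):

sudo apt-get install gddrescue
# first pass: grab everything readable, keeping a log so the run can be resumed
sudo ddrescue -n /dev/sdX /mnt/backup/passport.img /mnt/backup/passport.log
# second pass: retry the bad areas a few times
sudo ddrescue -r3 /dev/sdX /mnt/backup/passport.img /mnt/backup/passport.log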

Posted under Management

Windows 7 Plunge

Plan of Attack

After my RAID/hard drive debacle, I figured it was time to start with a clean slate.  Usually with my Windows boxes over the years, I would start with a clean drive and reinstall/upgrade the OS to clear out the bloat – I would accumulate all kinds of apps and drivers that just cluttered up the joint.  This situation was no different.  I did find out something, though… it was more than just the one faulty drive in the RAID pair.  I bought Win 7 32-bit (mistake 1), and then installed it on the same hardware (mistake 2).  My hardware was decent enough (4-core AMD, 4GB RAM, ATI 4xxx series) and the OS installed without a hitch.  However, on the second day after the install I was repopulating my working drive folders with pictures, video, etc., and when I played a video of a gypsy band in a pub from our Czech Republic trip, I got the stuttering, frozen black screen, and loss of the mouse cursor – essentially everything that happened right before that faulty drive got hosed (I was playing back a TiVo-to-Go video then).  OK, so my video card and/or motherboard had become unstable.  The motherboard had the Japanese solid-state capacitors and all looked OK (but that may be the issue, as there is no visible place for them to bulge – they may just up and die without a visual clue).

Regroup

Time for a whole new system.  Might as well – I noticed that the old motherboard used DDR2 RAM, so it had been a while since that machine was built.  So I sprang for everything (new case, 6-core CPU, motherboard, video card, RAM, Blu-ray combo player).  And correcting for mistake 1, I purchased Win 7 64-bit.  Price-wise, it was only a teeny bit more to buy a new OEM Win 7 Pro than to do an Anytime Upgrade, and the key point was that you cannot do an ‘Anytime Upgrade’ from a 32-bit to a 64-bit system.  Nuff said, it was time to build and install.

Because I was going to have a dual-boot system, I needed to install Windows first and Ubuntu second.  The Windows install was straightforward, but longer than the 32-bit install I had done (I guess it has to lay down all of 64-bit plus some of 32-bit for compatibility mode).  Once I got Win 7 up and running, I wiped out one of my internal drives that had an old XP system on it (quick-formatted to NTFS).  Then I took my Ubuntu 11.10 live/install disc and rebooted with it.  Because I have 3 drives (4 with the USB external) with a bunch of partitions, a Linux install can get confusing, so I chose to cancel the install, which basically brings up the live Ubuntu desktop.  Then I went into the disk manager, took a gander at my drives and partitions, made sure I had the right drive (since it was the only 100G drive, it was easy), and wrote the device name down to be sure.  Then I proceeded to the install app on the live system.  I made sure not to choose the “Install Alongside” option and chose the manual configuration option.  There I chose the empty 100G device, formatted 95% of it for the system mounted on ‘/’, and formatted the rest as the swap partition.  The install continued without issues.

After the reboot, because I don’t like the ‘Unity’ interface, I installed the old GNOME interface.  Ubuntu 11.10, unlike the first release of 11, stripped out Synaptic and the out-of-the-box ability to change your login screen.  Once the retro GNOME interface was installed, logging off and on allowed the interface change.  However, the ‘System’ menu is gone from the top menu bar.  At some point I may be forced to adapt to the Unity interface, but I’ll give it a go with ol’ GNOME for a bit.
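For reference, getting the classic desktop (and Synaptic) back on 11.10 boiled down to a couple of apt commands.  This is a sketch from memory, so double-check the package names against the repositories:

sudo apt-get install gnome-session-fallback
sudo apt-get install synaptic
# log out, then pick the classic GNOME session from the login screen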


Jury is Out

Well, I have set up a number of Win 7 boxes at the office – all new PCs and upgrades go to Win 7 Pro – but my work PC is still Vista Business.  Bottom line is I haven’t gotten too cozy with the OS just yet.  Now that I have it at home, I will have to get familiar with it regardless.  Anyway, it is interesting that Microsoft went back to ‘My Documents’ like it was in the XP days.  The look-and-feel issues are improvements over Vista – they got that right.

I do have one annoying issue (and I am not alone on this), and that is the power handling (sleep, hibernate, S1, S3, etc.).  I never had any issues with Vista waking the system after sleep (I enabled wake-on-mouse in the BIOS).  In my brief exposure to Win 7 32-bit, the monitor went black after sleep and could not be repowered no matter what I did; a cold reboot (4-second power hold) was the only way to get it back.  The weird thing is the monitor power button was hijacked – pushing it did nothing, no orange light, no green light.  I chalked it up to the bogus video card and/or motherboard.  With a completely new system (nothing the same except the SATA system drive), the exact same thing happens.  Like I mentioned, I am not the only one experiencing this: it happens with ATI cards and NVIDIA, different BIOSes, motherboards, etc.  It is quite prevalent – just Google ‘+windows 7 +monitor +blank +sleep’ and you’ll get plenty of hits.  The interesting thing is that even after I set ‘Turn off display’ to ‘never’ and ‘Put computer to sleep’ to ‘never’, it still happened after an overnight update (default automatic install).  So my workaround is to keep those power settings, change Windows Update to ‘Download, but install when I choose to’, and change the BIOS ‘wake’ trigger from S3 to S1.  I haven’t had a blank monitor issue for 4 days now, so I think that is behind me.  I did like setting my home PC to sleep after 5 hours of being idle and double-clicking the mouse to wake it – that is how my wife’s Vista PC is set up, and it is a great ‘green’ feature.  I will keep testing after future updates to see if the issue gets resolved.

The Positives

Besides the obvious issues with some legacy apps being unstable under 64-bit, I am happy with the performance of this new system.  After putting it together, I ended up with a Windows ‘Experience Score’ of 6, where my previous Vista system was 5.2.  The most attractive part of the new system, I think, will be the extended RAM access and the 6 cores.  That will be very handy in Blender (installed both on Win 7 and on my 64-bit Ubuntu) and ArcGIS 10 (which can take advantage of the extra cores).

Posted under Operating Systems

The Unthinkable (or at least Unwanted) Happened!

A system drive failure (and the S.M.A.R.T. system gave no warning!), and it wasn’t a simple drive – it was a RAID 0 [that is a zero, though it looks like a small ‘o’] set.  Serious lesson learned: no system drives on RAID 0 again!

How it happened: I was getting ready to go on a trip to Prague/Vienna and wanted to load some recorded TiVo shows from TiVo-to-Go onto a couple of SD cards.  I wanted to test how some had transferred to the PC (Vista Premium) and started playing one (which opened Media Player).  Things started to stutter and freeze, the mouse cursor no longer worked, and then came the dreaded Blue Screen of Death.  It mentioned something about the video driver and the typical ‘try to restart after removing offending hardware’.  Not possible on my system, as I do not have VGA built into the motherboard; it was an ATI 4550 video card.  Rebooting did not work – it said ‘missing operating system’.  The only time that had happened before was when I was using one of the many live Linux discs I have and the BIOS switched the SATA channel boot order.  I went into the BIOS and that was not it.  Apparently the stress of playing that video toasted a portion of one of my twin 500G Seagate drives.  The AMD RAID hardware driver/BIOS is so good that it noticed the missing chunk and would not load the striped set; the second drive always showed up as ‘offline’.  I wish it were more forgiving!

I went into panic mode because my backup drive is only 600G and the data I had on the 1-terabyte RAID pair was up to about 750G.  Since I hadn’t been able to fit the data on that drive for months, I said to myself, “I’ll get another, bigger backup drive and then back up.”  I actually said that 3 weeks ago, when my wife got a nasty redirect virus that overwrote many of her files (and she had Avira anti-virus installed).  Since I thought I was going to get another drive anyway, I had since used that portable drive for some work-related data.  Whoops!  No backup.  Even though I am fanatical about backups at my regular job as an IT manager, I let the home front slip.  I even have an offsite backup of our office data, remotely backed up with rsync to a terabyte drive on my Ubuntu server at home.
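For the curious, that offsite job is nothing fancy: just a nightly rsync over SSH, along these lines (the hostname and paths here are placeholders):

# pull the office share down to the terabyte drive on the home server
rsync -avz --delete -e ssh backupuser@office.example.com:/srv/officedata/ /mnt/terabyte/office-backup/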

I tell this tale so maybe you can avoid such stress and panic – and, if it does happen to you, so that my experience can help you get all (or some) of your data back.

The first thing I did was use the AMD RAIDXpert application to see what was up and to try to coax the failed drive back online.  I found that if I unplugged the power to the second drive, I could get it to respin, and the tab that said ‘activate’ would light up.  I clicked on that tab and the drive showed ‘functional’ for a few seconds, but this wasn’t working.  So then I rebooted, went into the BIOS, and configured SATA to emulate IDE.  Now, before I go on, I should say that I still had a regular PATA drive with my old XP system and data hooked up.  I had to re-install XP Pro (good thing the CD was still in my software drawer) and it installed back over that PATA drive.  Then I could do any recovery operations from that XP system, and I also have many live Linux discs to choose from.  So now the troubleshooting and tediousness could begin.

Many steps: the XP system could ‘see’ the first RAID drive (now as IDE), but could not see the second.  I fired up Seagate’s SeaTools and found the second drive was not there either.  Then I pulled the power plug on the 2nd drive again, it respun, and it popped up in the SeaTools window.  Hey, this is encouraging!  I performed a Short Drive Test and it failed.  I performed a Long Drive Test and it got more than 50% of the way through before failing.  SeaTools recommended SeaTools for DOS, so I downloaded it and created the bootable CD from the ISO.  After the reboot, I had to pull the power plug on the drive again to have it be recognized.  Then I performed a long test with the DOS version, which is supposed to try to repair any bad spots by replacing them with a hash (#?) or something, without moving the data bits.  That one stalled at about 62% and never finished, even after many hours.  So at least I knew where the problem was on the drive!  OK, I had to think of something else, so I started searching online.  I was beginning to consider a data recovery company with a certified clean room, but I saw some of the prices and almost fell out of my chair – $1500 up to $9000 in some cases.  Yikes!  Since I just bought a new car this year, I didn’t want to shell out many more thousands of dollars.  I then found QueTek Consulting in Texas.  They have a software app called ‘Recoup’ which supposedly extracts an image of a broken drive, retrying stubborn areas and skipping over bad data spots.  That was worth the money for me to try.  I also downloaded a trial of ‘Raid Recovery for Windows’ from another recovery site.  When I was able to coax the drive to show up with the power-plug trick (which was getting harder each time), the Raid Recovery app saw the two drives; I chose ‘RAID 0’, and the next screen showed some filenames and folders with a $ in front that were completely unreadable, and I couldn’t search for any file that I knew was there.  It said the total number of files was 17!  OK, so that app didn’t work.

Once I got the Recoup app registered, I started the image copy process.  I first tried it on a 500G USB drive and it said there was not enough room (even though the drive was completely empty and formatted NTFS).  So I went out and got a 750G USB drive and a replacement 500G internal SATA drive.  I know I should have gotten the exact same model as the failed drive, but it was no longer available, so I ended up with a WD with the same size cache and crossed my fingers.  My plan:

  • make an image of the failed drive
  • replace failed drive with new blank drive
  • move/copy image over to the new drive
  • switch the AMD SATA setting back to RAID in the BIOS
  • reboot and hope to see original drives that were RAID

My original RAID set consisted of 300G for Windows and programs, and 600G for data and extra stuff.  This may have been what saved my ass.

I ran Recoup to copy over to the new 750G USB drive.  This took 14 hours.  The good thing about this app is that if you stop or lose power, it picks up where it left off, and it keeps a detailed log in a spare folder as it goes along.  That extra data is why I could not make the image on a drive the same size as the failed drive.  I lost no power, so it continued straight through for 14 hours.  I was thinking of getting Acronis True Image because my Norton Ghost did not see the SATA drive (even when set as IDE).  I downloaded and tried HDClone and WinImage, and neither could read the ‘.dsk’ image created by Recoup – I even tried renaming the .dsk to .img.  Then I thought about ‘dd’ in Linux.  I use it all the time to create images for ARM boards (Beagleboard, PC104, etc.).  I searched online to see if others had used it for this.  A few forums and blogs mentioned trying it for RAID 0 recovery, but I could not find a single reference where it had successfully restored a drive – they ended with “…I’ll give that a try” and then never returned to fill us in.

I fired up Puppy Linux Live and it recognized the AMD SATA drives (CentOS did not).  I made sure the first RAID drive was unplugged, then checked the location of the second drive: it showed up as /dev/sda.  The USB drive with the image mounted automatically (as sca1), so I knew where the image file was.  So, from a terminal window, I used this command:

   [root user#]  dd if=/mnt/sca1/raid0-2.dsk of=/dev/sda bs=1k conv=sync,noerror

Crossing my fingers: when the copy was finished (about 1 hour), I shut down the live Linux, booted up, and changed the BIOS back to RAID for all the SATA channels.  When XP came up again, the first thing I did was load RAIDXpert, and it saw the two physical drives.  That is when I knew I could relax – the first logical drive was online!  Before, even with the RAID set recognized by coaxing the bad drive, both logical drives had always been offline.  The good news for me was that the system partition was on this first logical drive.  It was all there!  I don’t know if I’ll ever get the 600G partition back, but that held miscellaneous data and video mostly.  All my music is on a 750G drive attached to my Ubuntu server at home, so that was never in danger – although I think I had the music backed up on this 600G partition too (my music probably tops out at 400-450G in storage space).

So, what am I doing now?  Making a fresh, non-compressed backup of the system partition I temporarily lost, onto that 500G USB drive I was going to replace.  I am using RichCopy with about 7 threads to multitask the backup (I have a 4-core processor for it to use).  Then, once I am sure I can get no more out of that image file, I will wipe the 750G drive clean, use it as my Vista image backup, and never overwrite it with anything else.  I am going to reconfigure my drives (no RAID for the system partition!) and bump up to Windows 7 – but that will wait until I am back from abroad.

Conclusion: I think keeping the system on a 300G partition is what allowed the rebuilt RAID set to come back with its stripes intact.  I shudder to think what would have happened with one big terabyte “C:” drive – I bet I would have lost it all.  It is also interesting to note that using a different brand of drive in the RAID recovery process did not ruin my chances of recovery.

Posted under Management

WebDAV on Ubuntu (10.04)

Had an interesting go-around with WebDAV on a non-production test server with Apache2, and the errors seemed almost random.  There were a number of factors involved.  First, I am testing in preparation for having multiple authors update a website, and WebDAV was the logical choice.  Installing WebDAV was simple enough:

  • load or enable the modules with a2enmod: dav and dav_fs (commands sketched below)
  • load up the authentication module(s) [I started with auth_digest]
  • insert the directives, like turning DAV on
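In shell terms, that first pass is roughly the following (assuming the stock Ubuntu apache2 packages):

sudo a2enmod dav
sudo a2enmod dav_fs
sudo a2enmod auth_digest
sudo /etc/init.d/apache2 restart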

Well, it seemed to work with the default settings.  Since I was enabling this for site authors, it was not applied to a separate directory or folder, but to the root folder.  The initial test using ‘cadaver’ went well.  It all went downhill once I started testing with Dreamweaver: apparently, no matter what I did, Dreamweaver would not use ‘auth digest’ mode.  Supposedly, if you add PROPFIND to the <LimitExcept> directive, Dreamweaver should find and use Auth Digest.  Nope, didn’t work.  Since Apache thought the client was using ‘Auth Basic’, I decided to go ahead and use ‘Basic’.  That meant I had to create a new password file with ‘htpasswd’, which I did:

htpasswd -c /var/run/apache2/.passwordfile <username>

Well, after restarting Apache, I tested with cadaver and got new errors: it could not find the user.  One suggestion I found on the web said that you should put the server name before your user name (much like domain\user credentials in the Windows world), like this: servername\\username [the second backslash is there because the first one gets ignored – whatever].  This did not work either.  Others mentioned it was because some Ubuntu and Debian builds did not link the ‘authz_basic’ module properly.  I had it listed in my ‘mods-enabled’ folder, so it was loaded.

One novel suggestion (and I used this method) was to place all your DAV directives in the ‘dav_fs.conf’ file in the ‘mods-enabled’ folder, thereby automatically removing the directives whenever the DAV modules are unloaded.  Even so, ‘cadaver’ still didn’t work after all this.  I knew that if I could get ‘cadaver’ to see the user this time, I would be good to go, as Dreamweaver works with Auth Basic.

After all that messing around, it turned out to be pretty simple.  Originally, I had checked to make sure the Digest password file had the proper permissions (the group had to be www-data), which is why that one worked with ‘cadaver’.  When I created the new password file with htpasswd, it defaulted to a group of root.  I changed that file to the www-data group and everything worked.  This is how my directives in dav_fs.conf look:

DAVLockDB /var/run/apache2/lock/davlock
DAVMinTimeout 600

<Location /webdav/>
      DAV On
      AuthType Basic
      AuthName username
      AuthUserFile /var/run/apache2/.davpasswordfile
      <LimitExcept GET HEAD OPTIONS PROPFIND>
             Require valid-user
      </LimitExcept>
</Location>

My ‘Alias’ directive is still located in the default virtual host file.  Good news is it works now!
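For the record, the group fix amounts to something like this (the path follows the AuthUserFile directive above; www-data is Apache’s group on stock Ubuntu):

sudo chown root:www-data /var/run/apache2/.davpasswordfile
sudo chmod 640 /var/run/apache2/.davpasswordfile
sudo /etc/init.d/apache2 restart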

Posted under Applications

Ubuntu 11 (Natty) on Beagleboard

I have no idea what messed up the microSD that had Ubuntu 10.10.  I loaded up a new SD card with the latest 11.04 image (zcat and dd) and that went pretty smoothly.  Meanwhile, I decided to put Angstrom back on the corrupted SD card (worst case, it wouldn’t work) using my development PC (currently running Linux Mint).  It worked fine, but the screen went blank after gdm fired up.  I had gotten the image from the Narcissus site and custom-built it with GNOME and other extras; I probably should have stuck with the console version.  It fired up fine when I redid it with just the bare-bones console version.  Then I made the mistake of adding the gdm package.  I wasn’t sure at the time if that was the issue – it was.  I can’t remember if the SD card that came with the board ever got to the GNOME desktop.  I’m betting it has something to do with the fact that it is using HDMI, but I’m not that concerned as I have Ubuntu working like a champ.
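The image-writing step mentioned above (zcat and dd) is just a pipe.  Roughly this, with the image filename and SD card device as placeholders (double-check the device with fdisk -l, since dd will happily overwrite the wrong disk):

sudo fdisk -l    # identify the SD card, e.g. /dev/sdX
zcat ubuntu-11.04-omap-image.img.gz | sudo dd of=/dev/sdX bs=4M
sync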

I was hoping that with version 11 the clock-speed issue with the xM board would be solved.  Apparently the new kernel also chokes on the 1GHz Beagleboard.  I had to go into boot.scr in the /boot folder and change the CPU boot parameter from 1000 to 800.  After doing that, the speed came back (it was appalling before the change – it literally took 5 or 6 seconds for the prompt to come up after opening a terminal).  Before I made the boot-script change, the USB cam was hardly working: it was extra grainy and barely moved fast enough to have a frame rate, and focus was affected too.  After the fix, the cam worked as it should.

One thing that must be done if you are trying to interface with a serial port under Ubuntu 11 is to set the parameters for the FTDI device you use.  For some reason, the probe of the USB serial device (in this case an Arduino Nano) came out wrong.  I could get an Arduino sketch uploaded once after choosing /dev/ttyUSB0, but then it stopped working – it repeatedly said there was no such device.  Well, there wasn’t: nothing resembling it in the /dev folder.  What had to be done was to modprobe the ftdi_sio driver with the proper hex IDs.  If you run ‘lsusb’, you will get all the devices listed on the USB bus; pay attention to the vendor ID and product ID.  Most likely the vendor ID is correct – the product ID for me was off.  I had 6001 for my product ID and I think dmesg had reported a different number.  Regardless, it is a good idea to run modprobe again with the correct numbers.  Mine was:

sudo modprobe ftdi_sio vendor=0x0403 product=0x6001

After that, I could communicate with the Arduino and there was a ttyUSB0 entry in the /dev folder.  Now I just have to work on my flowchart and coding before I start piecing things together.
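As a sanity check after the modprobe, a few standard commands will confirm the driver actually bound (using the 0403:6001 IDs from above):

lsusb                     # the FTDI entry should show ID 0403:6001
dmesg | grep -i ftdi      # the driver should report attaching a ttyUSB device
ls -l /dev/ttyUSB*        # the serial node should now exist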

Posted under Operating Systems, Robotics

Beagleboard Ubuntu corrupted

I don’t know how it happened, but the Ubuntu 10.10 image and partitions I had on the Beagleboard got corrupted somehow.  This used to work flawlessly.  The little microSD card has been sitting in the slot of the board for about 9 months without being used.  I wonder if static from the cables still attached to the Beagleboard caused the corruption.  I booted and got all kinds of I/O errors.
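If I wanted to confirm the damage (or try to salvage anything), the first step from the development PC would be a filesystem check on the card.  A sketch, with the card’s root partition as a placeholder device:

sudo fdisk -l                  # find the card's root partition, e.g. /dev/sdX2
sudo fsck -f -v /dev/sdX2      # force a full check and report what it finds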

I am now putting an Ubuntu 11.04 netbook image on another SD card and will try that.  Perhaps during periods of non-use, I should pull the flash cards out!

Posted under Uncategorized

Alfresco Backup & Restore

Alfresco has been working great for over a month now.  One thing that had been nagging me was getting a successful backup and restore, and I finally figured out the problem.  I wanted to test on a practice machine so as not to brick the production site, and that was the thorn in my side.  I could create a new Alfresco setup from scratch that worked fine, but I wanted to test with the real data.  I figured that if I could accomplish this, then disaster recovery would be solved too (bringing the data and site up on a completely new machine in short order).

The premise seemed simple enough, but execution was unsuccessful.  I tried all kinds of methods.  I stopped Alfresco and copied the complete alf_data folder over to the new machine; I thought that should work, but it didn’t.  Then I went through various cold and hot backup procedures, like:

  • back up the MySQL database (while Alfresco is running) to an SQL file
  • stop Alfresco
  • copy the data and Lucene indexes (alf_data, basically)
  • restart Alfresco on the production machine and stop Alfresco on the test machine
  • place the alf_data folder in the proper place on the test machine
  • restore the MySQL database using the SQL file
  • restart Alfresco on the test machine

The different combinations yielded different results, but mainly the indexes would be corrupted even though the database was backed up properly.  I read somewhere that using mysqldump alone is not good enough because it does not actually back up all the data properly.  I used Webmin and got the 3 databases backed up automatically (alfresco, mysql, test), so that was not the problem.  Stopping Alfresco and tarring the whole alfresco folder should have done it, but even that didn’t work.
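For reference, my cold-backup attempts boiled down to something like this (a sketch, assuming the default /opt/alfresco install and the alfresco.sh control script that came with the installer):

# dump the database to an SQL file (Webmin can schedule the same thing)
mysqldump -u root -p alfresco > /backup/alfresco-db.sql

# stop Alfresco, grab the content store and Lucene indexes, restart
/opt/alfresco/alfresco.sh stop
tar -czf /backup/alf_data.tar.gz /opt/alfresco/alf_data
/opt/alfresco/alfresco.sh start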

The problem?  The test machine was a 32-bit install and the production machine was 64-bit.  At first, I thought it was because I was using Ubuntu 10.10 on the test machine (production was Ubuntu 10.04).  There were just too many 64-bit libraries and binaries in the 64-bit version to manually, methodically switch out on the test machine.  Since I didn’t have a spare 64-bit box lying around, I built a new one: I had a 64-bit AMD processor lying around and found a spare motherboard and 2G of RAM, so I put it together to complete the new test machine.  I installed the same Ubuntu 10.04 on it and got everything updated and running.  Well, after this it seemed like a piece of cake.  Creating a complete clone was as simple as untarring the /opt/alfresco folder and contents onto the new machine and starting Alfresco, and I was golden.  Things have to be the same for it to be a total clone, like having the same SMTP server running on it; I didn’t have Postfix running initially, so the email portions failed.  But it was a sigh of relief to see the site come up without a hiccup.  OK, so this works flawlessly if everything is in the same ‘/opt/alfresco’ folder (like the default).  But the data really should be on a different partition or spindle (a different hard drive).

So my next task was moving the data to a different area.  This is where you supposedly need to back up the databases properly and save the actual data (in the right order).  But moving the data was actually simpler than that.  Assuming you have a stable data store, all you have to do is the following (a rough shell sketch follows the list):

  • stop Alfresco
  • copy the alf_data folder with the GUI, or tar it (-cvf)
  • paste the folder to the new location, or untar it (-xvf)
  • delete the contents of /opt/alfresco/tomcat/temp
  • rename the original alf_data folder to something else
  • if you don’t have a huge site, delete the backup Lucene indexes in the new alf_data location
  • delete the Lucene index folder in the new alf_data location
  • make sure you have the ‘index.recovery.mode=AUTO’ line in the global properties file
  • change the data location (‘dir.root=/new location’) in the global properties file
  • restart Alfresco
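Here is that rough shell sketch of the list above (the new data location is made up; the Lucene folder names and alfresco-global.properties are the usual ones in this generation of Alfresco):

# stop Alfresco first
/opt/alfresco/alfresco.sh stop

# copy alf_data to the new location
tar -C /opt/alfresco -cvf /tmp/alf_data.tar alf_data
mkdir -p /srv/alfresco-data
tar -C /srv/alfresco-data -xvf /tmp/alf_data.tar

# clear the Tomcat temp folder and set the old data folder aside
rm -rf /opt/alfresco/tomcat/temp/*
mv /opt/alfresco/alf_data /opt/alfresco/alf_data.old

# drop the indexes in the new location so they get rebuilt on startup
rm -rf /srv/alfresco-data/alf_data/backup-lucene-indexes
rm -rf /srv/alfresco-data/alf_data/lucene-indexes

# in alfresco-global.properties:  dir.root=/srv/alfresco-data/alf_data
#                                 index.recovery.mode=AUTO
/opt/alfresco/alfresco.sh start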

This worked without a hitch.  I restarted a second time after deleting the Alfresco and Tomcat logs to get a clean picture of the current system and site.  The site worked as it should with the data in the new location.  Note that I did not have to monkey around with the MySQL database to get this to work.  That is because the MySQL database had been shut down properly and the databases and actual data files were not corrupt.  This method will NOT work if there is a problem (the main reason why you back up and restore in the first place); that is where the backup/restore procedures come into play.  Having the data on a different drive or partition adds an extra step, because you must rely on the MySQL database to bring back the indexing properly.  See this:  http://wiki.alfresco.com/wiki/Backup_and_Restore

Posted under Applications