NodeMCU (ESP8266) upload sketches wirelessly

More on the NodeMCU (V1)… This little booger is great!  I have tried all kinds of wireless upload solutions with Arduino (Olimexino with ESP8266, Nano with various wireless devices), and the end result of actually uploading the sketch never succeeded.  It all had to do with the reset that makes the bootloader accept code and overwrite the existing code.  I tried all kinds of elaborate methods: softserial, trigger words that closed the reset pin, and a few others I can't remember now.  I just wanted a way to update firmware/sketches while the device was far away and locked up, without having to hook up a USB cable to do the updating.

Enter the NodeMCU version of the ESP8266.  Out of the box it works with the Arduino IDE; no need to mess with Lua.  Once I replaced the ESP8266 core bundled with the IDE with the latest code from GitHub, everything worked like a charm.  On my Windows 7 PC, that library went into appdata\local\arduino15\packages\ESP8266.  The version that really worked was 2.3; version 2.2, which came with the IDE, didn't really work with wireless upload.  The key is the ArduinoOTA library, which was fixed and updated.  For me, getting it to work came down to the reset: in the default code, the reset is only called when the OTA function fails.  To get it to really work, you do the reset after the upload and bingo! it all works.

OTA port selection


After your first USB sketch upload with the OTA library and code in place, you have to close the Arduino IDE and then restart it for the OTA port to show up.  Not doing this step at first made me think the library was not working.  The image above shows the OTA port under 'network ports'.  This capability all comes from the Arduino Yun wifi procedures, so anyone with that board would be familiar with the network port in the list.

There really isn't much more to it than that.  The only downside, as far as I can tell, is that the serial monitor falls out of the mix.  You cannot use the network port with the serial monitor because it relies on Yun firmware that asks for an SSH password.  You can still power the NodeMCU board over USB and use another terminal program (like PuTTY) to see what is coming over serial.

My solution was to hook up an OLED screen to the board and display anything I needed to monitor.  The beauty of using an I2C OLED panel is that it only uses 2 wires.  I have a TFT LCD panel which is SPI, and I still may use that in the end as it has SD card storage as a bonus.  But for testing, the OLED is easy and just works.

OLED panel on nodeMCU


Here is the sketch code that grabs a static IP address and allows over-the-air uploading.  It also has the OLED bits for variable monitoring and feedback.

#include <ESP8266WiFi.h>
#include <ESP8266mDNS.h>
#include <WiFiUdp.h>
#include <ArduinoOTA.h>
#include <Wire.h>
#include "SSD1306.h" // alias for `#include "SSD1306Wire.h"`

SSD1306 display(0x3c, D2, D1); // I2C OLED: address 0x3c, SDA on D2, SCL on D1

const char* ssid = "your_local_ssid";
const char* password = "your_wifi_password";
const char* host = "ESP-OTA"; // hostname shown for the OTA network port

#define led_pin BUILTIN_LED
#define blu_pin D4
#define beat 450

IPAddress ip(192, 168, 1, 196); //Node static IP
IPAddress gateway(192, 168, 1, 1);
IPAddress subnet(255, 255, 255, 0);

void setup() {
  Serial.begin(115200);

  // Initialising the UI will init the display too.
  display.init();

  pinMode(led_pin, OUTPUT);
  pinMode(blu_pin, OUTPUT);

  WiFi.begin(ssid, password);
  WiFi.config(ip, gateway, subnet);
  while (WiFi.waitForConnectResult() != WL_CONNECTED) {
    WiFi.begin(ssid, password);
    Serial.println("Retrying connection...");
  }

  ArduinoOTA.setHostname(host);

  ArduinoOTA.onStart([]() { // switch off all the PWMs during upgrade
    analogWrite(led_pin, 0);
    analogWrite(blu_pin, 0);
  });

  ArduinoOTA.onEnd([]() { // do a fancy thing with our board led at end
    for (int i = 0; i < 80; i++) {
      digitalWrite(led_pin, HIGH);
      delay(i * 2);
      digitalWrite(led_pin, LOW);
      delay(i * 2);
    }
    ESP.restart(); // the fix: reset AFTER the upload, not only on failure
  });

  ArduinoOTA.onError([](ota_error_t error) {
    Serial.printf("OTA error [%u]\r\n", error);
  });

  /* setup the OTA server */
  ArduinoOTA.begin();
  Serial.print("IP address: ");
  Serial.println(WiFi.localIP());
  Serial.println("Now Online!");
}

void loop() {
  ArduinoOTA.handle(); // service OTA requests on every pass

  // clear the display, then show the SSID and signal strength
  display.clear();
  float power = WiFi.RSSI();
  String strpower = String(power, 1);
  display.drawString(0, 18, ssid);
  display.drawString(0, 32, "POWER: " + strpower);
  display.display();

  analogWrite(blu_pin, power); // drive the blue LED from RSSI (dBm)
  heartbeat();
}

void heartbeat() {
  digitalWrite(led_pin, HIGH);
  delay(beat);
  digitalWrite(led_pin, LOW);
  delay(beat);
}

Posted under Applications,Sensors

WebDAV on Ubuntu (10.04)

Had an interesting go-around with WebDAV on a non-production test server with Apache2, and the errors seemed almost random.  There were a number of factors involved.  First, I am testing in preparation for using multiple authors to update a website.  WebDAV was the logical choice.  Installing WebDAV was simple enough:

  • load or enable modules (a2enmod)  dav, dav_fs
  • load up authentication module(s) [I started with auth_digest]
  • insert directives like turning DAV on

Well, it seemed to work with default settings.  Since I was enabling this for authors, it was not used for a separate directory or folder, but for the root folder.  The initial test using 'cadaver' went well.  It all went downhill once I started testing with Dreamweaver.  No matter what I did, Dreamweaver would not use 'Auth Digest' mode.  Supposedly, if you add PROPFIND to the <LimitExcept> directive, Dreamweaver should find and use Auth Digest.  Nope, didn't work.  Since Apache thought the client was using 'Auth Basic', I decided to go ahead and use 'Basic'.  So that meant I had to create a new password file with 'htpasswd', which I did.
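For reference, this is the shape of the digest setup I was fighting with.  Treat it as a sketch only – the realm name and file paths are placeholders, with PROPFIND added to <LimitExcept> per the suggestion above:

```apache
<Location />
    DAV On
    AuthType Digest
    # realm; must match the realm used when creating the htdigest file
    AuthName "webdav-realm"
    AuthUserFile /var/run/apache2/.digestpasswordfile
    # everything except these read-only methods requires a login;
    # PROPFIND here is the tweak that supposedly satisfies Dreamweaver
    <LimitExcept GET HEAD OPTIONS PROPFIND>
        Require valid-user
    </LimitExcept>
</Location>
```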

htpasswd -c /var/run/apache2/.passwordfile username

Well, after restarting Apache, I tested with cadaver and got new errors: could not find 'user'.  One suggestion I found on the web said that you should name your user with the server name before it (much like the domain\user credentials in the Windows world), like this: servername\\username [the second backslash is there because it ignores the first – whatever].  This did not work either.  Others mentioned it was because some Ubuntu and Debian builds did not link the 'authz_basic' module properly.  I had this listed in my 'mods-enabled' folder, so it was loaded.

One novel suggestion (which I used) was placing all your DAV directives in the 'dav_fs.conf' file in the 'mods-enabled' folder, thereby automatically removing the directives when the DAV mods are unloaded.  Even so, 'cadaver' still didn't work after all this.  I knew that if I could get 'cadaver' to see the user this time, I would be good to go, as Dreamweaver works with Auth Basic.

After all that messing around, it turned out to be pretty simple.  Originally, I had checked to make sure the Digest password file had proper permissions (the group had to be www-data), which is why that one worked with 'cadaver'.  When creating the new password file with htpasswd, it defaulted to a group of root.  I changed that file to the www-data group and everything worked.  This is how my directives in dav_fs.conf look:

DAVLockDB /var/run/apache2/lock/davlock
DAVMinTimeout 600

<Location /webdav/>
    DAV On
    AuthType Basic
    AuthName username
    AuthUserFile /var/run/apache2/.davpasswordfile
    Require valid-user
</Location>

My 'alias' directive is still located in the default virtual host file.  Good news is it works now!

Posted under Applications

Alfresco Backup & Restore

Alfresco has been working great for over a month now.  One thing that had been nagging me was getting a successful backup and restore.  I figured out the problem.  I wanted to test on a practice machine, so as not to brick the production site.  That was the thorn in my side.  I could create a new Alfresco setup from scratch that worked fine, but I wanted to test with the real data.  I figured if I could accomplish this, then disaster recovery would be solved too (bring data and site up on a completely new machine in short order).

The premise seemed simple enough, but execution was unsuccessful.  I tried all kinds of methods.  I stopped Alfresco and copied the complete alf_data folder over to new machine.  I thought that should work.  It didn’t.  Then I went through various cold and hot backup procedures, like:

  • backup mysql database (while Alfresco running) to an sql file
  • stop Alfresco
  • copy the data and lucene indexes (alf_data basically)
  • restart Alfresco on production machine and stop Alfresco on test machine
  • place the alf_data folder in proper place on test machine
  • restore the mysql database by using the sql file
  • restart Alfresco on test machine
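The steps above can be sketched as a small shell function.  This is only a sketch: the paths, database name, and the alfresco.sh stop/start calls are assumptions based on the default install, and the mysqldump line is left as a comment since it needs the live server:

```shell
# backup_alfresco ALF_HOME DEST
# ALF_HOME is the Alfresco install dir (default /opt/alfresco);
# DEST is where the backup lands.
backup_alfresco() {
  alf_home=$1
  dest=$2
  mkdir -p "$dest"
  # 1. dump the database while Alfresco is still running:
  #    mysqldump -u alfresco -p alfresco > "$dest/alfresco.sql"
  # 2. stop Alfresco:  "$alf_home"/alfresco.sh stop
  # 3. grab the data and lucene indexes (alf_data, basically):
  tar -cf "$dest/alf_data.tar" -C "$alf_home" alf_data
  # 4. restart Alfresco:  "$alf_home"/alfresco.sh start
}

# Example: backup_alfresco /opt/alfresco /backups/alfresco-$(date +%F)
```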

The different combinations yielded different results, but mainly the indexes would be corrupted even though the database was backed up properly.  I read somewhere that using mysqldump alone is not good enough because it does not actually back up all the data properly.  I used Webmin and got the 3 databases backed up automatically (alfresco, mysql, test).  So that was not the problem.  Stopping Alfresco and tarring the whole alfresco folder should have done it, but even that didn't work.

The problem?  The test machine was a 32-bit version and the production machine was 64-bit.  At first, I thought it was because I was using Ubuntu 10.10 on the test machine (production was Ubuntu 10.04).  There were just too many 64-bit libraries and binaries in the 64-bit version to manually, methodically switch out on the test machine.  Since I didn't have a spare 64-bit box lying around, I built a new one: I had a 64-bit AMD processor lying around and found a spare motherboard and 2G of RAM, so I put it together to complete the new test machine.  I installed the same Ubuntu 10.04 on it and got everything updated and running.  After this, it seemed like a piece of cake.  Creating a complete clone was as simple as untarring the /opt/alfresco folder and contents to the new machine and starting Alfresco – I was golden.  Things have to be the same for it to be a total clone, like having the same SMTP server running on it.  I didn't have Postfix running initially, so the email portions failed.  But it was a sigh of relief to see the site come up without a hiccup.  OK, this works flawlessly if everything is in the same '/opt/alfresco' folder (like the default).  But data really should be on a different partition or spindle (different hard drive).

So my next task was moving the data to a different area.  This is where you supposedly need the databases backed up properly and the actual data saved (in the right order).  But moving the data was actually simpler than that.  Assuming you have a stable data store, all you have to do is:

  • stop Alfresco
  • copy the alf_data folder with a GUI, or tar it (tar -cvf)
  • paste the folder to the new location, or untar (tar -xvf)
  • delete contents of /opt/alfresco/tomcat/temp
  • rename original alf_data folder to something else
  • if you don’t have a huge site, delete backup lucene indexes in new alf_data location
  • delete lucene index folder in new alf_data location
  • make sure you have the ‘index.recovery.mode=AUTO’ line  in global properties
  • change location of data ‘dir.root=/new location’ in global properties
  • restart Alfresco
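The list above can be sketched as a shell function.  Again, a sketch under assumptions: the default /opt/alfresco layout and the lucene index folder names are what I saw on my install, and the stop/start and property edits stay manual (shown as comments):

```shell
# move_alf_data ALF_HOME NEW_DATA
# ALF_HOME: Alfresco install dir; NEW_DATA: new data-store path.
move_alf_data() {
  alf_home=$1
  new_data=$2
  # stop Alfresco first:  "$alf_home"/alfresco.sh stop
  mkdir -p "$(dirname "$new_data")"
  cp -a "$alf_home/alf_data" "$new_data"             # copy the data store
  rm -rf "$alf_home"/tomcat/temp/*                   # clear tomcat temp
  mv "$alf_home/alf_data" "$alf_home/alf_data.old"   # keep original aside
  rm -rf "$new_data/backup-lucene-indexes"           # small site: drop backups
  rm -rf "$new_data/lucene-indexes"                  # force an index rebuild
  # then in alfresco-global.properties set:
  #   index.recovery.mode=AUTO
  #   dir.root=<new location>
  # and restart:  "$alf_home"/alfresco.sh start
}

# Example: move_alf_data /opt/alfresco /srv/alfresco/alf_data
```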

This worked without a hitch.  I restarted a second time after deleting the alfresco and tomcat logs to get a clean picture of the current system and site.  The site worked as it should with the data in the new location.  Note that I did not have to monkey around with the mysql database to get this to work, because the mysql database was shut down properly and the databases and actual data files were not corrupt.  This method will NOT work if there is a problem (the main reason why you back up and restore); that is where the backup/restore procedures come into play.  Having data on a different drive or partition adds an extra step because you must rely on the mysql database to bring back the indexing properly.

Posted under Applications

Dipping into Alfresco

For our office, I needed to incorporate another collaboration site for our team on a project.  I had already used MS Sharepoint on a previous 3-year project.  The interesting thing is the users could never warm up to Sharepoint on that project.  Yes, Sharepoint is customizable, and I did a minimal amount with logo changes and the like, but I think folks just found the usability a little bumpy.  The main place for collaboration on that project?  It ended up being a plain old FTP site (which we always had), with one virtual directory/site as an upload area and another as the read-only repository.  Of course, that meant I had to do the moving of permanent documents and files to the 'download-only' site.

Enter 'open source' solutions for collaboration.  There are a few out there that do collaboration and ECM/WCM, like Drupal and Alfresco.  Based on the reviews and forums, I chose Alfresco.  Alfresco has two forks for end users: the Enterprise paid-for model that is stable and comes with support, and the 'community' version that is pure open source with the GPL and the whole 9 yards.  The community version is a little bit on the bleeding edge.  For instance, the 'beta' available for download now is 3.4e, where the somewhat stable version is 3.4d (which I am using).  All in all, I am very impressed with this collection of coding that is essentially Java-based with a 'Spring Surf' model as the framework.

What are the main reasons I chose Alfresco?  The main reason is the Sharepoint compatibility.  Your Microsoft Office apps do not know the difference; it works seamlessly from within the Office apps, or from the Alfresco server using "inline" editing.  Another reason is cost.  Well, that is a gray area.  Sometimes when using open source with no official support, you are on your own, and a production environment can be at risk of strange things happening (this is the case with Alfresco too – more on that later).  But the cost of the Community Edition is nothing; the hardware is up to you (I have a 64-bit Linux box running Ubuntu 10.04).  To get basic Sharepoint functionality, I would have had to get Windows Server 2008 Web Edition (the minimum entry cost-wise) and configure Sharepoint Foundation (Sharepoint Services) to work with external token-based security logins, as I am not using our Active Directory.  Why?  Because this project, like the last, is a statewide project that involves people outside our organization.  Sharepoint does work that way using form-based authentication, but it wasn't easy out of the box.  I set up a testbed version of Alfresco with the binary installer and it works with external users from the get-go.

Up and running:  It took two installs to get Alfresco running properly.  The install uses a binary '.bin' file that apparently works on many versions of Linux (did I mention Alfresco is also available for Windows x86 and x64 boxes?).  My first install failed because I already had Tomcat on my box and it conflicted horribly.  That was the main reason; another was my SMTP server, which was Zimbra.  So I uninstalled Alfresco, Zimbra, Tomcat, and MySQL.  I now had a clean Ubuntu server that still had Apache2 (I also deleted all virtual web sites for good measure).  Then I reinstalled Alfresco using the defaults (I chose to have it install MySQL).  Once installed, it resided in the /opt/alfresco directory.  I liked this idea better anyway, especially if this box is only used for this collaboration site/server: all pertinent files are contained in one directory tree.  Nightly back-ups simply back up the /opt/alfresco directory, and everything gets saved like a bare-metal backup of a complete server.  If something goes horribly wrong, a simple replacement of that directory tree brings everything back the way you want it.  If you set it up as a service, then you have a script in your /etc/init.d directory to start it up, and that is the only other area related to Alfresco involved in getting it running.  If you don't want it running at boot, you run it from the script (/opt/alfresco/ start).

Next I had to get my SMTP server installed.  I chose Postfix for simplicity.  There are a number of things you must do to get your email server to work with Alfresco for mailing out invites (the default model for getting users on the Share site).  These are mostly taken care of in the global properties file (alfresco-global.properties).  Depending on your install, the best way to find where it is (in Linux) is "find -name alfresco-global*".  You must set the server to know your email setup.  Like this:

##Email Outgoing ####
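My original settings are not reproduced here, but a typical outgoing-mail block in alfresco-global.properties looks something like this (the host and addresses are placeholders for your own Postfix setup):

```properties
mail.host=localhost
mail.port=25
mail.protocol=smtp
mail.username=anonymous
mail.password=
mail.encoding=UTF-8
mail.from.default=alfresco@yourdomain.com
mail.smtp.auth=false
```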

The interesting thing about those settings is that the default "from" does not work.  You have to go into the actual template in your repository to make that change, logging into the Share site as 'admin'.  This is the file you want to change under 'data dictionary' in the repository:

Email Invite

That should take care of email invites.  One other thing is catastrophic if not taken care of, and I did not see it mentioned in any of the install and setup blogs.  It is pretty major.  It may be a moot point after the next version, but version 3.4d remains broken in the install binary.  It is this: all goes well on your Share site (for days, weeks even) until someone uploads a PDF that messes with the Java library.  Apparently not all PDFs cause the issue, but all it took was one with our site.  This can bring your site to its knees.  After that nothing works, including Tomcat.  If I had known about this beforehand, I would not have had a live site come down (it took 5 hours to find and solve the issue).  What you get is an Apache page that says the 'service is temporarily unavailable' 'due to maintenance downtime or capacity problems'.  It happens after the upload, and then if any user navigates to the 'document library'… boom! everything is hosed.

The real tricky part is getting it working right again.  If it had been early in the day, a restore of the Alfresco tree would be a simple cure (though the underlying problem still has to be corrected); in my case it was the afternoon, and all the morning data would have been lost.  Restarting Tomcat throws errors because it died without getting rid of the 'pid' file in the Tomcat '/bin' directory.  What I did to get it back on its feet was stop Alfresco with the script (with errors), then delete all the files in /opt/alfresco/tomcat/temp, and delete the pid file and catalina.out in the /opt/alfresco/tomcat/bin directory.  This gets the site up and running again after running the start script.  Of course, the site goes down horribly the moment a user goes into the repository again.  Boy, I wish I knew this information before the site went live.  Anyway, it has to do with two files – the pdfbox and fontbox files that relate to the Java JDK files in the library.  Alfresco 3.4d shipped with version 1.2 of PDFBox and FontBox.
A number of people on the net said the fix was installing or upgrading to 'openjdk-1.6' instead of the Sun or other versions.  Well, that is exactly what I had, so that wasn't the issue for me.  It is the actual PDFBox and FontBox .jar files.  Here's the weird thing: replacing them with the current version 1.6 does not fix it; in fact it really screws up Tomcat, and the log files show tons of errors (I like to delete the current logfiles after each fix so I get pure output from the current fix).  The key for me was replacing the .jar files with version 1.3.  That fixed everything – all stable again.  You get them here (go to the older release section for 1.3):   You simply replace the old 1.2 versions with the 1.3 versions (do not rename them, leave them as 1.3), but you must remove or delete the 1.2 versions.  I left the 1.2 versions in with '.old' appended to the name and it didn't work, so I moved them to an inert directory.  Where are they in the Tomcat tree?  Right here:  /opt/alfresco/tomcat/webapps/alfresco/WEB-INF/lib
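The jar swap can be sketched like this.  The exact 1.2/1.3 jar filenames vary by release, so the globs and the holding-directory idea are assumptions; the point is that the 1.2 jars must leave the lib directory entirely:

```shell
# replace_pdfbox LIB_DIR NEW_JAR_DIR HOLDING_DIR
# Move the 1.2 pdfbox/fontbox jars OUT of Tomcat's lib (renaming them
# in place is not enough) and drop in the downloaded 1.3 versions.
replace_pdfbox() {
  lib=$1        # e.g. /opt/alfresco/tomcat/webapps/alfresco/WEB-INF/lib
  new_jars=$2   # where the downloaded 1.3 jars sit
  holding=$3    # inert directory outside the Tomcat tree
  mkdir -p "$holding"
  mv "$lib"/pdfbox-1.2*.jar "$lib"/fontbox-1.2*.jar "$holding"/
  cp "$new_jars"/pdfbox-1.3*.jar "$new_jars"/fontbox-1.3*.jar "$lib"/
}
```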

I hope this helps you before you get into trouble like I did.

Posted under Applications