A Simple Pattern for Alfresco Extensions

Over the years I have worked with and for Alfresco, I have written a ton of Alfresco extensions.  Some of these are for customers, some are for my own education, some for R&D spikes, etc.  I’d like to share a common pattern that comes in handy.  If you are a super experienced Alfresco developer, this article probably isn’t for you.  You know this stuff already!

There are a lot of ways to build Alfresco extensions, and a lot of ways to integrate your own code or connect Alfresco to another product.  There are also a lot of ways you might want to call your own code or an integration, whether that is from an Action, a Behavior, a Web Script, a scheduled job, or via the Alfresco Javascript API.  One way to make your extension as flexible as possible is to use what could informally be called the “Service Action Pattern”.

The Service Action Pattern

[Diagram: Service Action Pattern sequence]

Let’s start by describing the Service Action Pattern.  In this pattern, we take the functionality we want to make available to Alfresco and wrap it in a service object.  This is a well-established pattern in the Alfresco world, used extensively in Alfresco’s own public API.  The NodeService, ActionService, ContentService and others all take core functionality found in the Alfresco platform and wrap it in a well-defined service interface: a set of public methods that return Alfresco objects like NodeRefs, Actions and Paths, or Java primitives.  The service object is where all of our custom logic lives, and it provides a well-defined interface for other objects to use.  In many ways the service object acts as an adapter, translating back and forth between the domain-specific objects your extension requires and the objects Alfresco understands.  When designing a new service in Alfresco, I find it a best practice to limit the types returned by the service layer to things Alfresco natively understands.  If your service object method creates a new node, return a NodeRef, for example.
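To make the shape of this concrete, here is a minimal sketch of such a service.  Everything in it is hypothetical: the GlacierArchiveService name and the stand-in NodeRef record are mine, invented so the sketch compiles on its own.  In a real extension you would import org.alfresco.service.cmr.repository.NodeRef, wire the implementation as a Spring bean, and call real Alfresco services from inside it.

```java
// Sketch only: GlacierArchiveService and this NodeRef are stand-ins, not Alfresco classes.
class ServiceSketch {

    // Stand-in for Alfresco's NodeRef so the sketch is self-contained.
    record NodeRef(String id) { }

    // The service interface: all custom logic lives behind it, and its
    // methods accept and return types Alfresco natively understands.
    interface GlacierArchiveService {
        NodeRef archive(NodeRef content);   // returns the node that was archived
        boolean isArchived(NodeRef content);
    }

    // A trivial in-memory implementation standing in for the real integration logic.
    static class GlacierArchiveServiceImpl implements GlacierArchiveService {
        private final java.util.Set<String> archived = new java.util.HashSet<>();

        public NodeRef archive(NodeRef content) {
            archived.add(content.id());
            return content;
        }

        public boolean isArchived(NodeRef content) {
            return archived.contains(content.id());
        }
    }
}
```

The important design point is the signatures: they traffic only in NodeRef and a boolean, so anything in Alfresco (an Action, a behavior, a web script) can call this service without any special knowledge of the domain behind it.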

A custom service object on its own isn’t terribly useful, since Alfresco doesn’t know what to do with it.  This is where an Alfresco Action comes in handy.  We can use one or more Alfresco Actions to call the services that our service object exposes.  Creating an action to call the service object has several advantages.  First, once you have an Action you can easily call that Action (and thus the underlying service object) from the Javascript API (more on this in a moment).  Second, it is easy to take an Action and surface it in Alfresco Share for testing or so your users can call it directly.  Actions can also be triggered by folder rules, which can be useful if you need to call some code when a document is created or updated.  Finally, Actions are registered with Alfresco, which makes them easy to find and call from other Java or server side Javascript code via the ActionService.  If you want to do something to a file or folder in Alfresco there is a pretty good chance that an Action is the right approach.
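The Action side of the pattern can be sketched the same way.  In a real extension the class would extend Alfresco’s ActionExecuterAbstractBase and receive an Action plus the actioned-upon NodeRef from the repository; the stand-in types below are hypothetical so the sketch runs by itself.  The point is the shape: the Action validates its input and then delegates everything to the service object.

```java
// Sketch only: the types below are stand-ins, not Alfresco classes.
class ActionSketch {

    record NodeRef(String id) { }

    interface GlacierArchiveService {
        NodeRef archive(NodeRef content);
    }

    // The Action is just wiring: a thin, registered entry point over the service.
    static class ArchiveContentAction {
        private final GlacierArchiveService archiveService;

        ArchiveContentAction(GlacierArchiveService archiveService) {
            this.archiveService = archiveService;
        }

        // Loosely mirrors ActionExecuterAbstractBase#executeImpl(Action, NodeRef):
        // sanity check, then hand off to the service.
        void executeImpl(NodeRef actionedUponNodeRef) {
            if (actionedUponNodeRef == null) {
                throw new IllegalArgumentException("No node to act upon");
            }
            archiveService.archive(actionedUponNodeRef); // all real logic lives in the service
        }
    }
}
```

Because the Action holds nothing but a reference to the service, swapping the integration logic or reusing it from a behavior never touches the Action class.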

Using the Service Action Pattern also makes it simple to expose your service object as a REST API.  Remember that Alfresco Actions can be located and called easily from the Javascript API.  The Javascript API also happens to be (IMHO) the simplest way to build a new Alfresco Web Script.  If you need to call your Action from another system (a very common requirement) you can simply create a web script that exposes your action as a URL and call away.  This does require a bit of boilerplate code to grab request parameters and pass them to the Action, which in turn will call your service object.  It isn’t too much and there are lots of great examples in the Alfresco documentation and out in the community.
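That boilerplate layer can be sketched as well.  The Request and ActionRunner interfaces below are hypothetical stand-ins for the web script request object and the ActionService lookup; a real Java-backed web script would extend DeclarativeWebScript (or you would write the same few lines in the server-side JavaScript API).  Either way, the controller’s whole job is: read parameters, hand them to the named Action, return a result.

```java
// Sketch only: Request and ActionRunner are stand-ins, not Alfresco web script classes.
class WebScriptSketch {

    // Stand-in for the incoming web script request.
    interface Request {
        String getParameter(String name);
    }

    // Stand-in for looking up and running a registered Action with parameters.
    interface ActionRunner {
        String run(String actionName, java.util.Map<String, String> params);
    }

    static String handle(Request req, ActionRunner actions) {
        // The boilerplate: pull parameters off the request...
        java.util.Map<String, String> params = new java.util.HashMap<>();
        params.put("nodeRef", req.getParameter("nodeRef"));
        // ...and pass them straight to the Action, which calls the service object.
        return actions.run("glacier-archive", params);
    }
}
```

Note how little lives here: no domain logic, no integration code, just parameter plumbing between HTTP and the Action layer.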

So why not just bake the code into the Action itself?  Good question!  First, any project of some complexity is likely to have a group of related functionality.  A good example can be found in the AWS Glacier Archive for Alfresco project we built a couple of years ago at an Alfresco hack-a-thon.  This project required us to have Actions for archiving content, initiating a retrieval, and retrieving content.  All of these Actions are logically and functionally related, so it makes sense to group them together in a single service.  If you want the details of how Alfresco integrates with AWS Glacier, you only have to look at the service implementation class; the Action classes themselves are just sanity checks and wiring.  Another good reason to put your logic into a service class is reuse outside of Actions.  Actions carry some overhead, and depending on how you plan to use your logic you may want to make it available directly to a behavior or expose it to the Alfresco Javascript API via a root scope object.  Both are straightforward if you have a well-defined service object.

I hope this helps you build your next awesome Alfresco platform extension; I have found it a useful way to implement and organize my Alfresco projects.

Alabama Cyber Now Conference Recap


Birmingham’s tech scene keeps getting better!  Yesterday was the second annual Alabama Cyber Now security conference hosted by TechBirmingham, the Central Alabama Chapter of the Information Systems Security Association (CAISSA) and the InfraGard Birmingham Members Alliance.  This conference brings together security and other technology professionals from across the southeast for a day of engaging talks, panels and networking, along with a sizable vendor hall.  Last year’s event was so successful that this year they had to move to a bigger venue.

I’m not a security specialist, but I do think that everybody that works in technology needs to have a working knowledge of core security concepts.  In my day job I work with technology leaders in all sorts of organizations from government agencies to brand marketing firms, from insurance and banking to healthcare.  Security topics come up all the time.  Single day events like the Alabama Cyber Now conference provide a great chance to learn from experts in the field about new threat types and security best practices, and to get a glimpse into how they look at securing their systems and those of their customers.  For those that need them, the Alabama Cyber Now conference also provided a chance to earn some CPE credits, a requirement for maintaining CISSP and other security certifications.

The conference was generally very well run, with a few exceptions.  The check-in lines got pretty long, leading to some delays, and it looked like there were people still waiting to check in when the morning keynote was starting.  Maybe next year it would help to offer earlier check-in or add more people to staff it.  Coffee was hard to come by; I didn’t see any stations set up outside of the morning keynote and lunch.  Conferences run on coffee.  The badges had people’s names and organizations printed in a pretty small font, which made it hard to glance at somebody’s badge and see who they were and what org they were with.  But really, those are tiny things that didn’t affect the overall conference experience.  The conference team did an awesome job putting on the event, and I’m grateful we had the opportunity to have such a diverse group of speakers delivered right to our doorstep.

The Talks

The highlight of any event is usually the keynote(s), and this event was no exception.  Dave Shackleford was the morning keynote, talking about the challenges and opportunities in cloud security.  Dave brought up some great points and provided some guidance for how to integrate security into the DevOps processes, which he is terming DevSecOps.  Developers and ops won’t tolerate security slowing down their cycles, so security needs to find ways to automate and integrate with the cycles they are already executing.  Velocity means everything today for competitiveness.  If I recall correctly, Dave also declared the old idea of a “bullseye” security model with perimeters to be dead.  This was a theme that was repeated in several other talks in varying ways.

The second keynote is the one that I was really looking forward to.  Bruce Schneier is one of the best known and most respected voices in security.  Bruce lived up to his reputation, delivering a riveting talk about IoT security and how the internet of things completely changes the way we should be looking at security.  The stakes are much higher now; it’s not just data.  When everything has a computer in it, software security becomes the security of everything.  We have essentially given the internet the ability to gather data from the physical world (sensors), to make decisions based on that data (compute / AI) and to affect the physical world (actuators).  Bruce described it as humanity inadvertently building “a world sized robot”.  Essentially, his argument is that we aren’t intentionally building this world sized thing; it’s an emergent property of hooking up billions of sensors and actuators to a global network.  Bruce ended his talk with a call for regulation to help us avoid the worst consequences of poor IoT security (like the Dyn attack).  This made several folks in the audience visibly uncomfortable and sparked a few questions during the Q&A session that followed the talk.

Speaking of the Dyn attack, another talk I really enjoyed was one in which we got to hear from Chris Baker at Dyn.  He gave a detailed timeline of the attack, how it worked and what strategies his team and others used to map and mitigate the attack, and how they are preparing for the next one.   The other talks I attended included how pen testers (and malicious actors) approach phishing, security analytics in a world where each device and user is its own “front” in a war between attackers and defenders, and a great talk on the role of privilege escalation and lateral movement in a breach.  The last one I listed was delivered by Andy Givens from CyberArk, and was one of my favorite breakouts.  I especially enjoyed the case studies where he walked through how the hackers got a foothold, and how they expanded from that initial landing.

Next Year

Assuming this event happens again next year, there are a few things I would love to see added.  A few “101” style talks would be great as an introduction to areas of security that one might not be familiar with.  I’d also like to see some developer focused talks.  Developers can always get better at building more secure applications, evaluating libraries for security concerns before they adopt them, etc.  Maybe a talk could be added for DevOps folks that integrates parts of Dave Shackleford’s DevSecOps model.

All in all, it was a good event, and one that I hope continues next year.  Thanks to all of the organizers, volunteers, sponsors and vendors that made it happen!

Living Behind the Curve – Technology Adoption in Alabama


I’d like to preface this article by saying that it isn’t intended to bash Alabama.  I’ve lived most of my life here, and despite leaving several times to pursue job and educational opportunities, I keep coming back.  I love the south, and I want it to succeed.

These days I’m pretty hot on Birmingham.  We have a growing, passionate tech scene, the city center has sprung back to life and thanks to UAB there is a diverse influx of smart people moving here every day.  There’s a lot to like about living in Birmingham.  The cost of living remains low, the people are friendly, the weather is great and the food is expanding my waistline without draining my wallet.  However, Birmingham is a bit of a bubble.  It’s easy to get swept up in the spirit and energy of the city and forget about the context in which it exists.

Life Behind the Curve

Anecdotal evidence suggests that states like Alabama have systemic features that delay the adoption of innovations.  We look to other states and see things like ride sharing, solar power, deep broadband penetration, fiber networks or municipal wifi, open data / eGovernment initiatives, mobile payments, smart cities projects, technology startups and many other innovations and we wonder “Why not here?”.  It can probably be explained by several factors.

Everett Rogers originally proposed the “diffusion of innovations” theory, and along with it the notion that the diffusion of innovations roughly followed a bell curve distribution.  New ideas (or technology, in this case) follow a curve in which innovators and early adopters pick it up first, followed by early and late majorities, and finally by the laggards.


Rogers describes the five groups thusly (Diffusion of Innovations, Rogers, 1962):

Innovators: “Innovators are willing to take risks, have the highest social status, have financial liquidity, are social and have closest contact to scientific sources and interaction with other innovators. Their risk tolerance allows them to adopt technologies that may ultimately fail. Financial resources help absorb these failures.”

Early Adopters: “These individuals have the highest degree of opinion leadership among the adopter categories. Early adopters have a higher social status, financial liquidity, advanced education and are more socially forward than late adopters. They are more discreet in adoption choices than innovators. They use judicious choice of adoption to help them maintain a central communication position.”

Early Majority: “They adopt an innovation after a varying degree of time that is significantly longer than the innovators and early adopters. Early Majority have above average social status, contact with early adopters and seldom hold positions of opinion leadership in a system.”

Late Majority: “They adopt an innovation after the average participant. These individuals approach an innovation with a high degree of skepticism and after the majority of society has adopted the innovation. Late Majority are typically skeptical about an innovation, have below average social status, little financial liquidity, in contact with others in late majority and early majority and little opinion leadership.”

Laggards: “They are the last to adopt an innovation. Unlike some of the previous categories, individuals in this category show little to no opinion leadership. These individuals typically have an aversion to change-agents. Laggards typically tend to be focused on “traditions”, lowest social status, lowest financial liquidity, oldest among adopters, and in contact with only family and close friends.”

Take a look at the words used to describe the innovators and early adopters.  Things like “financial resources”, “closest contact to scientific sources”, “advanced education” and “opinion leadership”.  Now look at the statistics about Alabama.  Alabama consistently sits near the bottom of rankings for educational quality and attainment.  We are one of the poorest states in America, ranking near the middle for GSP (Gross State Product) but close to the bottom for GSP per capita.  Our unemployment rate remains stubbornly high at 6.2% (third worst in the country) as of this writing.  Alabama also has one of the highest levels of income inequality (as measured by the Gini coefficient) in the United States.  Perhaps most worrisome is the fact that Alabama is one of the states that consistently loses college graduates year over year.  Statistically, one would expect to see fewer people that fit into the innovator or early adopter categories in a state with Alabama’s economic and educational profile.

We can see similar trends in the way late adopters and laggards are described.  Words like “traditional”, “little financial liquidity” and “older” are used to describe folks on the trailing edge of the adoption curve.  Tradition and heritage are a big part of the culture in Alabama and according to the 2010 census Alabama ranks in the bottom 10 in terms of the percentage of the population that lives in urban areas.  Population density means more opportunity to be exposed to new ideas and to have a larger social circle.  Age also plays a role.  Alabama isn’t the oldest state in the union, but it is in the bottom half by average age.  Culturally and demographically, Alabama exhibits many characteristics that lean toward later adoption of innovation.

It seems that Rogers’ theory explains in large part why we don’t see faster and deeper technology adoption in this state, but it doesn’t tell the whole story.

The Effect of Policy

The effect of public policy on innovation adoption is profound.  Several categories of adopter in Rogers’ model indirectly reference cost, usually in terms of an adopter’s financial resources.  It stands to reason that if the cost is higher for a given innovation then the financial resources required to adopt it will also be higher, thus reducing the pool of people that could fall into the innovators or early adopters category even if they were otherwise inclined to pick up a new innovation.  Policy can directly influence adoption cost by driving it down through subsidy, favorable tax treatment and streamlined regulation.   It can also go the other way and drive cost up by making adoption more complex, favoring competing legacy technologies with tax breaks or implementing regulatory hurdles for new competitors.

Unfortunately Alabama’s leadership hasn’t, well, led.  Take solar power as an example.  This innovation is seeing broad adoption across the US, helping to offset power usage during daytime peak periods and providing good paying technical employment.  Some states have taken steps to accelerate solar rollout, implementing net metering requirements, tax credits and other policies to drive down the cost and make it accessible to more adopters.  Alabama, despite having ample sun as a resource and needing the extra power during hot, sunny days to offset the peak loads created by air conditioning, has not only taken almost no meaningful steps toward making solar more affordable but has in fact implemented unnecessary regulatory hurdles.  Ride sharing provides another example.  The Birmingham City Council fought to keep ride sharing services from launching in the city, asking for onerous regulation that almost kept this innovation out entirely.  Now Alabama is proposing mandatory content filters on all mobile devices, with a fee to remove them.  This sort of thinking absolutely drives down adoption by reducing choice and increasing costs.  I’m not going to speculate about the reasons for all of these artificial barriers, but the outcomes are clear.

Automation, Adoption and Jobs

Why do we need to be concerned about technology adoption in Alabama?  In a word:  Jobs.  Right now a lot of job growth (especially automotive) in Alabama is driven by low cost labor, which is in turn enabled by the low cost of living in the state.  The massive tax breaks the state gives to large employers don’t hurt either.  Other large employment categories in the state include retail and cashiers, material handlers, agriculture and truck drivers.  All of these jobs are ripe for disruption by automation.  As soon as the price of automation drops below the cost of labor, these workers will begin to be replaced.  This is, in my opinion, the best case scenario.  The worst case is that our lack of technology adoption will lead us to resist automation, which will ultimately make us uncompetitive and lead to the current influx of economic activity turning quickly into an exodus.  If we don’t solve the adoption problem, not only will we lose the economic growth we have fought so hard to gain, but we won’t be able to ride the next wave of jobs that will come from automation.

What do we need?

It’s a lot to ask, but what we need are massive investments in all levels of education, a restructuring of our tax system to put the means for adoption into the hands of more people, and more innovation-friendly regulation, all coordinated in a single large push to prepare us for the future.  We have an opportunity to use our currently improving economic situation to make the kind of strategic investments that will prepare Alabama for the next century, but we need to start now.  Will our leaders answer the call?

Alfresco Premier Services – New Blog


I’m not shy about saying the best thing about my job is my team.  Never in my career have I worked with such a dedicated, skilled and fun group of people.  Whether you are talking about our management team or our individual contributors, the breadth and depth of experience across the Alfresco Premier Services team is impressive.  Members of our team have developed deep expertise across our product line, from auditing to RM, from workflow to search.  Some folks on the team have even branched out and started their own open source projects around Alfresco.  We have decided to take the next step in sharing our knowledge and launch a team blog.

The inaugural post details some recent changes to the Alfresco Premier Services offerings that coincide with the digital business platform launch.  We will follow that up in short order with new articles covering both content and process services including guidance on FTP load balancing and pulling custom property files into a process services workflow.

Lots of exciting stuff to come from the Premier Services Team, stay tuned!

A Few Thoughts on TEDxBirmingham


Birmingham has come a long way since the days when the city was known as a dying steel town where a heavy handed local government turned fire hoses and police dogs on peaceful civil rights protestors.  Today the city is better known for its leading edge medical research, its commitment to civil rights education and preserving its history, and its burgeoning food scene.  It’s a city that has its eyes firmly fixed on the future, made stronger by the mistakes of its past.  Nowhere is this shift more evident than at TEDxBirmingham.

For those that don’t know, TEDx conferences are independently organized and licensed conferences in the spirit of the larger TED conferences.  These things have popped up all over the world, with more than 15,000 events to date generating over a billion talk views.  That’s some serious reach for events largely put together by volunteers.  Four years ago, a group of people in Birmingham, Alabama decided to get in on the action and start a local TEDx conference.  This was my first year to attend, and I was absolutely floored at the professionalism, passion and energy on display.  It’s a reflection of the direction Birmingham as a whole is headed.  I’d like to put down a few thoughts before the details fade from memory.  The conference helpfully provided a program notebook in the goody bag, complete with a section to jot down notes on each talk.

The Talks

All of the speakers were outstanding in their own way.  It’s clear that the coaching and practice for speakers paid off.  There’s a certain cadence and consistency to the way TED speakers present.  They don’t typically do a big introduction about themselves, instead launching right into the content.  They take you on a journey, instead of just walking through some slideware.  They are well rehearsed, sharing their story without notes.  Most importantly, they are concise.  Short, even.  The idea is distilled into a concentrated essence, delivered as a personal story that brings the audience along in a way that seems almost hypnotic, effortless.  A few of my favorite moments:

Brian Reaves was the first speaker of the day, and provided a strong open for the conference.  Brian is an illusionist, which I thought an odd choice for a TED style event, especially as the opener.  That thought disappeared along with several of his props on stage as he talked about how magic forces you to see the impossible as possible, and how reframing a problem or letting go of your current perspective can completely change the way you approach things.  Start from “It can be done, I just need to figure out how”.  Having Brian kick off the conference helped everybody open their minds a little bit for the rest of what followed.

Later on in session one, we heard from Dunya Habash.  Dunya is a musician, filmmaker and refugee advocate.  She spoke about visiting the Zaatari Syrian Refugee Camp in Jordan and how it challenged her notions of what it meant to be a refugee.  Her talk was probably the most moving moment of the day, doing more to show the resilience, ingenuity and humanity of displaced persons than any media report I have ever seen.  So much of what we hear and see about refugees is at best incomplete, at worst deliberately misleading.  She also shared her thoughts on the way refugees are portrayed, driving home the point that consumer friendly media is failing us in ways we don’t even understand.  The important stories are so much deeper than a Facebook post, so much more complex than a soundbite, and we owe them our full attention.

Unsurprisingly for an event connected in so many ways to UAB, many of the talks had a medical or public health angle.  Dr. Michael Saag challenged our assumptions about eradicating hepatitis C.  He ended with a concrete call to action to help us make eradication a reality by reducing the price of lifesaving drugs.  I was a bit perplexed that he called out drug advertising as a major cost contributor, but never mentioned lobbying against it in his call.  Dr. Julian Maha made a compelling case for expanding the way we address disability to include those invisible, sensory disabilities.  Most interesting to me is how he positioned his story as a set of inclusive, reasonably easy to implement changes that can open the world to those with sensory challenges.  Dr. Jayme Locke educated the crowd on the human cost of kidney transplant waiting lists and an innovative matchmaking network to improve the chances of donors and those needing a living donor getting connected.  Will Wright unpacked the true cost of loneliness on health and challenged all of us to reach out to somebody that needs a friendly ear.

We also heard from some people working hard to change the social fabric of Birmingham.  Honestly, social issues aren’t usually my focus area, but these sessions were some of the most moving and eye opening.  Maacah Davis coined what may be my new favorite phrase for creating in a space full of constrictive (and often harmful) assumptions:  “Artistic Claustrophobia”.  Lara Avsar deconstructed how harmful both princess fantasies and Wonder Woman stereotypes can be to a young woman (especially poignant as I figure out how to raise my own daughter), and pointed to a better way in which struggle, failure and resiliency are core values.  Diedre Clark introduced us all to the concept of Kuumba (the idea of leaving our community more beautiful and beneficial than we found it) and her innovative community arts program by the same name.  Anne Wright took us on a heartbreaking journey through running a program for homeless men, and encouraged us to use our own hopelessness as common ground, a gateway from sympathy to empathy.

It wouldn’t be a TED conference without some representation from education, which Randall Woodfin filled in nicely.  Mr. Woodfin has been a fixture in education in the city for some time, a steady, moderate and thoughtful voice for the children of Birmingham.  Oddly absent from his introduction was the fact that he has declared his candidacy for mayor, but perhaps the TEDxBirmingham organizers felt it was best to keep politics on the sideline.  Either way it was great to hear from our mayoral candidate about his views on education, community and how the two have to work hand in hand for the best outcomes.  Elizabeth Bevan rounded out the day, sharing her passion for sea turtles, her job and her research.  She gave us a glimpse into the present and future of researching animals in their natural environments with drones.  Drones are a relatively new addition to the field biologist’s toolkit, but are already allowing researchers to observe behaviors never seen before.

The Interactive Sessions

I’ve been to a LOT of conferences, usually attending 3-4 a year on a variety of topics.  I also present regularly on system architecture, content management, document security and many other topics.  I love the TED format and wish more conferences would structure themselves in a similar way; maybe we’d have less “death by PowerPoint”.  Alternating between sit down sessions and walking around for some more interactive session time keeps the blood flowing, and gives you an opportunity to discuss what you’ve just heard with the other conference attendees.  It also gives you a chance to ask follow up questions of the speakers, who usually had a nice crowd around them during every interactive session.  Probably my favorite bit from the interactive sessions was the artists making art on site that reflected what was shared in the talks.  My only complaint was that I didn’t get a chance to try out the VR/AR gear, it always had quite a line.

The Event in General

In a word, it was professional.  Start to finish.  The crowds were well managed.  There were a ton of ambassadors around if you had questions and every one of them had a smile.  The check in was fast.  Lunch was easy.  There was always coffee (this is huge).  The presentations were well run.  It’s hard to believe that event was entirely put on by volunteers.  Unbelievably well done.  About the only thing I might change is the addition of a speaker or two that has a bit of a technology focus, but I’m in the industry so that’s my bias showing.  With AI and machine learning reshaping the world, and digital transformation completely rewiring companies from the ground up or disrupting them entirely there is a lot of ground to cover there.

I can’t wait for next year.  If you live in Birmingham and want to see how inspired and inspiring your city can be, you need to be there too.

Image credit:  tedxbirmingham.org.  Hope y’all don’t mind that I borrowed it.

Content Services is in Alfresco’s DNA

I’m spending this week at Alfresco’s Sales Kickoff in Chicago, and having a blast.  There’s a lot of energy across the company about our new Digital Business Platform, and it’s great to see how many people instantly and intuitively get how Alfresco’s platform fits into a customer’s digital transformation strategy.  When content and process converge, and when we provide a world class platform for managing and exposing both as a service it’s a pretty easy case to make.  We have some great customer stories to drive the point home too.  It’s one thing to talk about a digital strategy and how we can play there, but it’s another thing entirely to see it happen.

Content management is undergoing a shift in thinking.  Analysts have declared that ECM is dead, and content services is a better way to describe the market.  For my part, I think they are right.  Companies ahead of the curve have been using Alfresco as a content services platform for a long time.  I decided to do a little digging and see when Alfresco first added a web API to our content platform.  A quick look through some of our internal systems shows that Alfresco had working web services for content all the way back in 2006.  It was probably there earlier than that, but that’s one of the earliest references I could easily find in our systems.  That’s over a decade of delivering open source content services.  Here’s a quick view of the history of content services delivery channels in the product.

[Image: API History]

I don’t think any other company in the market space formerly known as ECM can say that they have been as consistently service enabled for as long as Alfresco.  It’s great to see the market going to where we have been all along.

Open Source in an AI World. Open Matters More Now Than Ever.


Technological unemployment is about to become a really big problem.  I don’t think the impact of automation on jobs is in any doubt at this point; the remaining questions are mostly around magnitude and timeline.  How many jobs will be affected, and how fast will it happen?  One of the things that worries me the most is the inevitable consolidation of wealth that will come from automation.  When you have workers building a product or providing a service, a portion of the wealth generated by those activities always flows to the people that do the work.  You have to pay your people, provide them benefits, time off, etc.  Automation changes the game, and the people that control the automation are able to keep a much higher percentage of the wealth generated by their business.

When people talk about technological unemployment, they often talk about robots assuming roles that humans used to do.  Robots to build cars, to build houses, to drive trucks, to plant and harvest crops, etc.  This part of the automation equation is huge, but it isn’t the only way that technology is going to make some jobs obsolete.  Just as large (if not larger) are the more ethereal ways that AI will take on larger and more complex jobs that don’t need a physical embodiment.  Both of these things will affect employment, but they differ in one fundamental way:  Barrier to entry.

High barriers

Building robots requires large capital investments for machining, parts, raw materials and other physical things.  Buying robots from a vendor frees you from the barriers of building, but you still need the capital to purchase them as well as an expensive physical facility in which you can deploy them.  They need ongoing physical maintenance, which means staff where the robots are (at least until robots can do maintenance on each other).  You need logistics and supply chain for getting raw materials into your plant and finished goods out.  This means that the financial barrier to entry for starting a business using robots is still quite high.  In many ways this isn’t so different from starting a physical business today.  If you want to start a restaurant you need a building with a kitchen, registers, raw materials, etc.  The difference is that you can make a one-time up-front investment in automation in exchange for a lower ongoing cost in staff.  Physical robots are also not terribly elastic.  If you plan to build an automated physical business, you need to provision enough automation to handle your peak loads.  This means idle capacity when you aren’t doing enough business to keep your machines busy.  You can’t just cut a machine’s hours and reduce operating costs in the same way you can with people.  There are strategies for dealing with this like there are in human-run facilities, but that’s beyond the scope of this article.

Low barriers

At the other end of the automation spectrum is AI without a physical embodiment.  I’ve been unable to find an agreed upon term for this concept of a “bodiless” AI.  Discorporate AI?  Nonmaterial AI?  The important point is that this category includes automation that isn’t a physical robot.  Whatever you want to call it, a significant amount of technological unemployment will come from this category of automation.  AI that is an expert in a given domain will be able to provide meaningful work delivered through existing channels like the web, mobile devices, voice assistants like Alexa or Google Home, IoT devices, etc.  While you still need somewhere for the AI to run, it can be run on commodity computing resources from any number of cloud providers or on your own hardware.  Because it is simply applied compute capacity, it is easier to scale up or down based on demand, helping to control costs during times of low usage.  Most AI relies on large data sets, which means storage, but storage costs continue to plummet to varying degrees depending on your performance, retrieval time, durability and other requirements.  In short, the barrier to entry for this type of automation is much lower.  It takes a factory and a huge team to build a complete market-ready self driving car.  You can build an AI to analyze data and provide insights in a small domain with a handful of skilled people working remotely.  Generally speaking, the capital investment will be smaller, and thus the barrier to entry is lower.

Open source democratizes AI

I don’t want to leave you with the impression that AI is easy.  It isn’t.  The biggest players in technology have struggled with it for decades.  Many of the hardest problems are yet to be solved.  On the individual level, anybody that has tried Siri, or Google Assistant or Alexa can attest to the fact that while these devices are a huge step forward, they get a LOT wrong.  Siri, for example, was never able to respond correctly when I asked it to play a specific genre of music.  This is a task that a 10 year old human can do with ease.  It still requires a lot of human smarts to build out fairly basic machine intelligence.

Why does open source matter more now than ever?  That was the title of this post, after all, and it’s taking an awfully long time to get to the point.  The short version is that open source AI technologies further lower the barriers to entry for the second category of automation described above.  This is a Good Thing because it means that the wealth created by automation can be spread across more people, not just those that have the capital to build physical robots.  It opens the door for more participation in the AI economy, instead of restricting it to a few companies with deep pockets.

Whoever controls automation controls the future of the economy, and open source puts that control in the hands of more people.

Thankfully, most areas of AI are already heavily colonized by open source technologies.  I’m not going to put together a list here; Google can find you more comprehensive answers.  Machine learning / deep learning, natural language processing, and speech recognition and synthesis all have robust open source tools supporting them.  Most of the foundational technologies underpinning these advancements are also open source.  The most popular languages for doing AI research are open.  The big data and analytics technologies used for AI are open (mostly).  Even robotics and IoT have open platforms available.  What this means is that the tools for using AI for automation are available to anybody with the right skills to use them and a good idea for how to apply them.  I’m hopeful that this will lead to broad participation in the AI boom, and will help mitigate to a small degree the trend toward wealth consolidation that will come from automation.  It is less a silver bullet, more of a silver lining.

Image Credit: By Johannes Spielhagen, Bamberg, Germany [CC BY-SA 3.0], via Wikimedia Commons

A Brief History of Screwing Up Software


Unless you live in a cave you have heard by now about the recent massive AWS outage and how it kind of broke the Internet for a lot of people.  Amazon posted an account of what went wrong, and the root cause is the sort of thing that makes you cringe.  One typo in one command was all it took to take a huge number of customers and sites offline.  If you have been a software developer or administrator or in some other way have had your hands on important production systems you can’t help but feel some sympathy for the person responsible for the AWS outage.  Leaving aside the wisdom of giving one person the power and responsibility for such a thing, I think we have all lived in fear of that moment.  We’ve all done our fair share of dumb things during our tech careers.  In the interest of commiserating with that poor AWS engineer, here are some of the dumbest things I’ve done during my life in tech:

  1. Added four more layers of duct tape to the “infrastructure” that holds the internet together with several bad routing table choices.
  2. Had my personal site hacked and turned into a spam spewing menace.  Twice.  Pay attention to those Joomla and Drupal security advisories folks, those that would do you harm sure do!
  3. Relied on turning it off and then back on again to fix a deadlock I couldn’t find a root cause for.  Embedded systems watchdog FTW.
  4. Wrote my own implementation of an HTTP server.  I recommend everybody do this at least once just so you can see how good you have it.  Mine ended up being vulnerable to a directory traversal attack.  Thankfully a friend caught it before somebody evil did.
  5. Used VB6 for a real project that ended up serving 100x as many users as it was intended to.  Actually, let’s just expand that to “used VB6”.
  6. Done many “clever” things in my projects that came back to bite me later.  Nothing like writing code and then finding out a year later that you can’t understand what you did.  Protip:  Don’t try to be clever, be clear instead.
  7. Ran a query with a bad join that returned a cartesian product.  On a production database that was already underpowered.  With several million rows in each table.
  8. Ran another query that inadvertently updated every row in a huge table when I only actually needed to update a handful.  Where’s that WHERE clause again?  Backups to the rescue!

Anybody that spends decades monkeying around with servers and code will have their own list just like this one.  If they tell you they don’t, they are either too arrogant to realize it or are lying to you.  I’m happy to say that I learned something valuable from each and every example above and it’s made me better at my job.

What’s your most memorable mistake?

Technological Unemployment in Alabama


Much has been written about the effect that technology has on jobs.  While some (myself included) have seen our careers benefit immensely from the march of technological progress, many people have not been so fortunate.  I grew up in the Detroit area from the late 70s to the early 90s and even back then I remember people talking about how many workers were going to lose their jobs to machines.  I was young at the time, but the fear and uncertainty was palpable and was a regular topic of discussion in households connected to the automotive industry.  Companies rejoiced at the prospect of workers that never took a day off and did their jobs consistently, correctly and without complaint.  Human workers had a different view.  The robots were coming, an unstoppable mechanical menace that would decimate employment.

Robots vs. “Robots”

Fast forward to the present day.  In the words of the immortal Yogi Berra, it feels like “deja vu all over again”.  Thought leaders around the world are talking about the effect of automation on jobs, and the scope is orders of magnitude larger than it was before.  The rise of AI, machine learning, natural language processing, big data, cheap sensors and compute power and other technologies has expanded the scope of technological unemployment well past the assembly line.  Even jobs that were considered safe ten years ago are now witnessing at the very least a sea change in how they are done, if not outright extinction at the hands of technology.  Retail jobs face threats from cashier-free stores like Amazon Go.  Truck drivers watched helplessly as Otto completed its first driverless delivery.  Personal drivers such as chauffeurs, taxi drivers and ride share drivers are nervous about self driving car technologies from Uber, Waymo, Ford, GM and others.  Projects like the open-source Farmbot offer a glimpse into the future of farming where seeding, watering and weeding tasks are carried out completely by machine.  Even previously labor intensive tasks related to harvesting are being automated.  Timber and logging are on the block as well, with robots being deployed to take down trees and handle post-harvest processing.  Construction is facing automation too.  We already have robots that can lay bricks faster than a human worker, and companies around the world are experimenting with 3D printing entire structures.

While the physical manifestations of technology may be the most visible, they are not the only way technology is changing the face of work.  When people hear the word “robot” they usually think of a physical machine.  A Terminator.  An industrial robot that welds car frames.  A Roomba.  A self driving car.  That’s only part of the story.  AI doesn’t need a physical body, it can live on the internet, basically invisible until you interact with it, and still do jobs a human does today.  We are already seeing this in many spaces from healthcare supplemented by IBM Watson, to fully automated AI driven translation services that in the future will handle language tasks that people do today.  In short, this isn’t just a “blue collar” issue, it’s much bigger than that.  Consider what happens when you can just ask Alexa, and don’t need to call a human expert for help with a problem.

Sweet Home Unemployment

What does this have to do with Alabama, specifically?  I had not really given much thought to the specific impact on the southeast until I first read about Futureproof Bama.  These folks have a mission “to help the state of Alabama prepare for the transition period for when robotics, artificial intelligence, and other autonomous innovations will make most traditional work obsolete.”  The point was really driven home for me when I had the chance to sit down and chat with Taylor Phillips at a recent Awesome Foundation Awesome Hour event and talk automation, jobs and the future of work.  Why is this of particular concern to Alabama?  Will we be impacted more than other areas of the country?  Well, let’s take a look.

According to usawage.com (which calculates their numbers from BLS data) there are about 1.85 million Alabamians employed as of 2015.  3.7% of them are retail cashiers.  2% are freight, stock or material movers.  1.66% are heavy truck drivers.  Light truck, delivery, industrial truck and tractor operators combine to add about another 1.1%.  Tellers and counter clerks are another 1% or so.  Various construction trades are another couple of points, as is agriculture.  I could go on, but you can read the list for yourself.  It isn’t hard to tally up the affected professions on this list and get to 20-30% of the state’s workforce squarely in the crosshairs of automation in the short term.  To put this in perspective, unemployment during The Great Depression peaked at around 25%.  Will Alabama fare worse than other areas?  I don’t know; an extensive analysis of affected employment categories across states would be necessary to answer that question with any kind of certainty.  Regardless of how Alabama stacks up to other states, it’s clear that we face a period of great change in how we view work.

Sounds grim, but you might ask “Can’t these folks just find another job?”.  Maybe, maybe not.  If the entire job category disappears, then no, at least not in their chosen field.  Even if a job doesn’t disappear entirely, a reduction in the number of available positions in a particular industry combined with a steady or increasing number of people that want that job will result in a serious downward pressure on salary.  Good for employers, bad for employees.  At a minimum, we’ll have a huge number of people that will need to retrain into a different job.  Even then, it’s unlikely that we’ll have enough labor demand across industries to soak up the excess supply.  All of this adds up to a very big problem that will require an equally big solution.  We cannot legislate automation out of existence, nor can we simply ignore the people that are left out of work.

If this concerns you, and you want to join a group of people that are working hard to get Alabama ahead of the curve, pop over to Futureproof Bama and get involved.

My Favorite New Things in the Alfresco Digital Business Platform


Everybody inside Alfresco has been busy getting ready for today’s launch of our new version, new branding, new web site, updated services and everything that comes along with it.  Today was a huge day for the company, with the release of Alfresco Content Services 5.2, a shiny new version of Alfresco Governance Services, our desktop sync client, the Alfresco Content Connector for Salesforce, a limited availability release of the Alfresco App Dev Framework and refreshes of other products such as our analytics solution, media management and AWS AMIs / Quickstarts.  Here are a few of my favorite bits from today’s releases (in no particular order).

The new REST API

Alfresco has always had a great web API, both the core REST API that was useful for interacting with Alfresco objects, and the open-standards CMIS API for interacting with content.  Alfresco Content Services 5.2 takes this to the next level with a brand new set of APIs for working directly with nodes, versions, renditions and running search queries.  Not only is there a new API, but it is easier than ever to explore what the API has to offer via the API Explorer.  We also host a version of the API Explorer so you can take a look without having to set up an Alfresco instance.  The new REST API is versioned, so you can build applications against it without worrying that something will change in the future and break your code.  This new REST API was first released in the 5.2 Community version and is now available to Alfresco Enterprise customers.  The API is also a key component of the Alfresco App Development Framework, or ADF.  Like all previous releases, you can still extend the API to suit your needs via web scripts.
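To make the versioning point concrete, here is a minimal sketch of how a client might address the v1 API.  The host, credentials, and node ID below are placeholder assumptions for illustration; the `/nodes/{id}/children` path follows the versioned URL scheme the API Explorer documents:

```python
# Sketch only: building a request to the versioned v1 REST API.
# Host, user, and node ID are illustrative placeholders, not defaults
# you should rely on in production.
import base64


def children_url(host, node_id="-root-"):
    """Build the URL for listing a node's children via the v1 REST API."""
    return (f"{host}/alfresco/api/-default-/public/alfresco/"
            f"versions/1/nodes/{node_id}/children")


def basic_auth_header(user, password):
    """Standard HTTP Basic auth header, encoded per RFC 7617."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}


url = children_url("http://localhost:8080")
headers = basic_auth_header("admin", "admin")
```

From there any HTTP client can issue the GET.  Because the version number is baked into the path, the same client code should keep working even as the API grows in future releases.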

Alfresco Search Services

Alfresco Content Services 5.2 comes with a whole new search implementation called Alfresco Search Services.  This service is based on Solr 6, and brings a huge number of search improvements to the Alfresco platform.  Search term highlighting, indexing of multiple versions of a document, category faceting and multi-select facets and document fingerprinting are all now part of the Alfresco platform.  Sharding also gets some improvements and you can now shard your index by DBID, ACL, date, or any string property.  This is a big one for customers supporting multiple large, distinct user communities that may each have different search requirements.  Unlike previous releases of Alfresco, search is no longer bundled as a WAR file.  It is now its own standalone service.
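As an illustration of the new search capabilities, a client can POST a JSON query to the public search endpoint and request term highlighting in the same call.  This is a hedged sketch: the AFTS query text, the highlighted field, and the `<em>` markers are my own placeholders, not values prescribed by the release:

```python
# Sketch only: a request body for the public search endpoint
# (POST /alfresco/api/-default-/public/search/versions/1/search).
# Query text and highlight field are illustrative assumptions.
import json


def search_body(term, highlight_field="cm:content"):
    """Build an AFTS search request that asks for term highlighting."""
    return {
        "query": {"language": "afts", "query": term},
        "highlight": {
            "prefix": "<em>",
            "postfix": "</em>",
            "fields": [{"field": highlight_field}],
        },
    }


body = search_body("alfresco")
payload = json.dumps(body)  # ready to POST as application/json
```

The `afts` language setting selects Alfresco Full Text Search syntax, and matching hits come back with fragments wrapped in the given prefix and postfix, ready to render in a results list.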

The Alfresco App Dev Framework

Over the years there have been a number of ways to build what your users need on top of the Alfresco platform.  In the early days this was the Alfresco Explorer (now deprecated), built with JSF.  The Share UI was added to the mix later, allowing a more configurable UI with extension points based on Surf and YUI.  Both of these approaches required you to start with a UI that Alfresco created and modify it to suit your needs.  This works well for use cases that are somewhat close to what the OOTB UI was built for, or for problems that require minimal change to solve.  For example, both Explorer and Share made it pretty easy to add custom actions, forms, or to change what metadata was displayed.  However, the further you get from what Share was designed to do, the more difficult the customizations become.

What about those cases where you need something completely different?  What if you want to build your own user experience on top of Alfresco content and processes?  Many customers have done this by building out their own UI in any number of different technologies.  These customers asked us to make it easier, and we listened.  Enter the Alfresco App Dev Framework, or ADF.  The ADF is a set of Angular2 components that make it easier to build your own application on top of Alfresco services.  There’s much more to it than that, including dev tooling, test tooling and other things that accelerate your projects.  The ADF is big enough to really need its own series of articles, so may I suggest you hop over to the Alfresco Community site and take a look!  Note that the ADF is still in a limited availability release, but we have many customers that are already building incredible things with it.

Admin Improvements

A ton of people put in a tremendous amount of work to get Alfresco Content Services 5.2 out the door.  Two new features that I’ve been waiting for are included, courtesy of the Alfresco Community and Alfresco Support.  The first is the trashcan cleaner, which can automate the task of cleaning out the Alfresco deleted items collection.  This is based on the community extension that many of our customers have relied on for years.  The second is the Alfresco Support Tools component.  Support Tools gives you a whole new set of tools to help manage and troubleshoot your Alfresco deployment, including thread dumps, profiling and sampling, scheduled job and active session monitoring, and access to both viewing logs and changing log settings, all from the browser.  This is especially handy for those cases where admins might not have shell access to the box on which Alfresco is running or have JMX ports blocked.  There’s more as well, check out the 5.2 release notes for the full story.

The Name Change

Ok, so we changed the name of the product.  Big deal?  Maybe not to some people, but it is to me.  Alfresco One is now Alfresco Content Services.  Why does this matter?  For one, it more accurately reflects what we are, and what we want to be.  Alfresco has a great UI in Share, but it’s pretty narrowly focused on collaboration and records management use cases.  This represents a pretty big slice of the content management world, but it’s not what everybody needs.  Many of our largest and most successful customers use Alfresco primarily as a content services platform.  They already have their own front end applications that are tailor made for their business, either built in-house or bought from a vendor.  These customers need a powerful engine for creating, finding, transforming and managing content, and they have found it in Alfresco.  The name change also signals a shift in mindset at Alfresco.  We’re thinking bigger by thinking smaller.  This new release breaks down the platform into smaller, more manageable pieces.  Search Services, the Share UI, Content Services and Governance Services are all separate components that can be installed or not based on what you need.  This lets you build the platform you want, and lets our engineering teams iterate more quickly on each individual component.  Watch for this trend to continue.

I’m excited to be a part of such a vibrant community and company, and can’t wait to see what our customers, partners and others create with the new tools they have at their disposal.  The technology itself is cool, but what you all do with it is what really matters.