Spinning up the SmarterBham project at Code for Birmingham

There are few things that get my inner geek as excited as the intersection of technology and the public sphere. We have only begun to scratch the surface of the ways technology can improve governance, and of the ways we can put the power to transform the places people live into the hands of those people themselves. This sort of civic hacking has been promoted by groups like Code for America for some time. Code for America is loosely organized into “brigades”, each serving a particular city. These independent units operate all over the US and have gone worldwide. Like any town worth its salt, Birmingham has its own brigade. I first became aware of it back in 2015, attended a few meetings, and then it fell off my radar. The group has produced a lot of valuable work, including an app for spotting and reporting potholes, contributions to open data policies, and traffic accident analysis.

For about a year now I’ve grown increasingly interested in building IoT devices for monitoring various aspects of city life. My first project was an air quality monitor (which is still up and running!). At the same time I got interested in The Things Network and other ways citizens can participate in, and own, the rollout of IoT projects at scale. The price of technology has dropped so far and connectivity has become so ubiquitous that it is entirely feasible for a group of dedicated people to roll out their own IoT solutions with minimal monetary investment.

When these two things collided, something started happening.  Some of the folks at Code for Birmingham got excited.  I got excited.  Community partners got excited.  We made a plan.  We designed some things.  We ordered parts.  We started coding.  We made a pitch deck (because of course you need a pitch deck).  We applied for grants.  We built a team.  A couple months down the road we’re making serious progress.  One of our team members has made huge strides in building a prototype.  Another has started on our AWS templates.  We’re getting there.

Take a look at what we’re building and if you want to be a part of something awesome, get in touch.  We need designers, coders, CAD gurus, testers, writers, data wizards, and of course, some dreamers.  All are welcome.

(Possibly) Enhancing Alfresco Search Part 2 – Google Cloud’s Natural Language API

In the first article in this series, we took a look at using Stanford’s CoreNLP library to enrich Alfresco Content Services metadata with some natural language processing tools.  In particular, we looked at using named entity extraction and sentiment analysis to add some value to enterprise search.  As soon as I posted that article, several people got in touch to see if I was working on testing out any other NLP tools.  In part 2 of this series, we’ll take a look at Google Cloud’s Natural Language API to see if it is any easier to integrate and scale, and do a brief comparison of the results.

One little thing I discovered during testing that may be of note if anybody picks up the Github code to try to do anything useful with it: Alfresco and Google Cloud’s Natural Language API client library can’t play nice together due to conflicting dependencies on some of the Google components. In particular, Guava is a blocker. Alfresco ships with, and depends on, an older version. Complicating matters further, the Guava APIs changed between the version Alfresco ships with and the version the Google Cloud Natural Language API library requires, so it isn’t as straightforward as grabbing the newer Guava library and swapping it out. I have already had a quick chat with Alfresco Engineering, and it looks like this is on the list to be solved soon. In the meantime, I’m using Apache HttpClient to access the relevant services directly. It’s not quite as nice as the idiomatic approach the Google Cloud SDK takes, but it will do for now.
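
To make the workaround concrete, here is a minimal sketch of calling the entity analysis endpoint directly with Apache HttpClient 4.x. The endpoint and request shape come from Google’s public REST documentation; the class name and the API key handling are illustrative assumptions, not the actual project code:

```java
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class GoogleNlClient {

    private static final String ANALYZE_ENTITIES_URL =
            "https://language.googleapis.com/v1/documents:analyzeEntities?key=";

    // Returns the raw JSON response; a real integration would map the
    // entities onto Alfresco metadata rather than returning a string.
    public String analyzeEntities(String apiKey, String text) throws Exception {
        String body = "{ \"document\": { \"type\": \"PLAIN_TEXT\", \"content\": "
                + jsonEscape(text) + " }, \"encodingType\": \"UTF8\" }";
        HttpPost post = new HttpPost(ANALYZE_ENTITIES_URL + apiKey);
        post.setEntity(new StringEntity(body, ContentType.APPLICATION_JSON));
        try (CloseableHttpClient client = HttpClients.createDefault()) {
            return EntityUtils.toString(client.execute(post).getEntity());
        }
    }

    // Naive escaping, just for illustration; real code would build the
    // request body with a JSON library instead of string concatenation.
    private String jsonEscape(String s) {
        return "\"" + s.replace("\\", "\\\\").replace("\"", "\\\"")
                .replace("\n", "\\n") + "\"";
    }
}
```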

Metadata Enrichment and Extraction

The main purpose of these little experiments has been to assess how suitable each tool may be for using NLP to improve search. This is where, I think, Google’s Natural Language product could really shine. Google is, after all, a search company (and yes, more than that too). Google’s entity analyzer not only plucks out all of the named entities, but it also returns a salience score for each. The higher the score, the more important or central that entity is to the entire text. The API also returns the number of proper noun mentions for that entity. This seems to work quite well, and the salience score isn’t looking at just the number of mentions. During my testing I found several instances where the most salient entity was not the one mentioned most often. Sorting by salience and making only the most relevant entities searchable metadata in Alfresco would be useful; a filter along those lines is sketched below. Say, for example, we are looking for documents about XYZ Corporation. A simple keyword search would return every document that mentions that company, even if the document wasn’t actually about it. Searching only those documents where XYZ Corporation is the most salient entity (even if not the most frequently mentioned) would give us much more relevant results.
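
As a sketch of that idea, a simple filter over the analyzeEntities JSON response might keep only entities above a salience cutoff before writing them to metadata. This assumes the org.json library is on the classpath, and the threshold is an arbitrary starting point to tune:

```java
import java.util.ArrayList;
import java.util.List;

import org.json.JSONArray;
import org.json.JSONObject;

public class SalienceFilter {

    // Keep only entities salient enough to treat as "what this document
    // is about"; entities, name and salience are the fields Google returns.
    public List<String> topEntities(String analyzeEntitiesJson, double minSalience) {
        List<String> names = new ArrayList<>();
        JSONArray entities = new JSONObject(analyzeEntitiesJson).getJSONArray("entities");
        for (int i = 0; i < entities.length(); i++) {
            JSONObject entity = entities.getJSONObject(i);
            if (entity.getDouble("salience") >= minSalience) {
                names.add(entity.getString("name"));
            }
        }
        return names;
    }
}
```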

Sentiment analysis is another common feature in many natural language processing suites that may be useful in a content services search context. For example, if you are using your content services platform to store customer survey results, transcripts of chats or other documents that capture an interaction, you might want to find those that were strongly negative or positive to serve as training examples. Another great use case exists in the process services world, where processes are likely to capture interactions in a more direct fashion. Sentiment analysis is an area where Google’s and CoreNLP’s approaches differ significantly. The Google Natural Language API provides two ways to handle sentiment analysis: the first analyzes the overall sentiment of the provided text, while the second provides sentiment analysis for identified entities within the text. These are fairly simplistic compared with the full sentiment graph that CoreNLP generates. Google scores sentiment along a scale of -1 to 1, with -1 being the most negative and 1 the most positive.
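
For search purposes the document-level score is easy to consume. Here is a hedged sketch of bucketing the analyzeSentiment response into a coarse, searchable label; score and magnitude are the fields Google actually returns, while the thresholds are made up and would need tuning:

```java
import org.json.JSONObject;

public class SentimentBucketer {

    // Buckets the document-level result from analyzeSentiment into coarse
    // labels that could be stored as a searchable Alfresco property.
    public String bucket(String analyzeSentimentJson) {
        JSONObject doc = new JSONObject(analyzeSentimentJson)
                .getJSONObject("documentSentiment");
        double score = doc.getDouble("score");         // -1.0 (negative) .. 1.0 (positive)
        double magnitude = doc.getDouble("magnitude"); // overall emotional strength
        if (magnitude < 0.5) {
            return "neutral"; // not enough signal either way
        }
        return score >= 0.25 ? "positive" : (score <= -0.25 ? "negative" : "mixed");
    }
}
```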

Lower Level Features

At the core of any NLP tool are the basics of language parsing and processing such as tokenization, sentence splitting, part of speech tagging, lemmatization, dependency parsing, etc.  The Google Cloud NL API exposes all of these features through its syntax analysis API and the token object.  The object syntax is clear and easy to understand.  There are some important differences in the way these are implemented across CoreNLP and Google Cloud NL, which I may explore further in a future article.

Different Needs, Different Tools

Google Cloud’s Natural Language product differs from CoreNLP in some important ways. The biggest is simply the fact that one is a cloud service and the other is traditionally released software. This has its pros and cons, of course. If you roll your own NLP infrastructure with CoreNLP (whether on-premises or in the cloud) you’ll certainly have more control, but you’ll also be responsible for managing the thing. For some use cases this might be the critical difference. Best I can tell, Google doesn’t allow for custom models or annotators (yet). If you need to train your own system or build custom pieces into the annotation pipeline, Google’s NLP offering may not work for you. This is likely to be a shortcoming of many cloud-based NLP services.

Another key difference is language support. CoreNLP ships with models for English, Arabic, Chinese, French, German and Spanish, but not all annotators work for all languages. CoreNLP also has contributed models in other languages of varying completeness and quality. Google Cloud’s NLP API has full-fledged support for English, Japanese and Spanish, with beta support for Chinese (simplified and traditional), French, German, Italian, Korean and Portuguese. Depending on where you are and what you need to analyze, language support alone may drive your choice.

On the feature front there are also some key differences when you compare “out of the box” CoreNLP with the Google Cloud NL API. The first thing I tested was entity recognition. I have been doing a little testing with a collection of short stories from American writers, and so far both seem to do a fair job of recognizing basic named entities like people, places, organizations, etc. Google’s API goes further, though, and will recognize and tag things like the names of consumer goods, works of art, and events. CoreNLP would take more work to do that sort of thing, since it isn’t handled by the models that ship with the code. On sentiment analysis, CoreNLP is much more comprehensive (at least in my admittedly limited evaluation).

Scalability and ergonomics are also concerns. If you plan to analyze a large amount of content there’s no getting around scale. Without question, Google wins, but at a cost. The Cloud Natural Language API uses a typical utilization cost model: the more you analyze, the more you pay. Ergonomics is another area where Google Cloud NL has a clear advantage. CoreNLP is a more feature-rich experience, and that shows in the model it returns. The Google Cloud NL API just returns a logically structured JSON object, making it much easier to read and interpret the results right away. There’s also the issue of interface. CoreNLP relies on a client library. The Google Cloud NL API is just a set of REST calls that follow the usual Google conventions and authentication schemes. There has been some work to put a REST API on top of CoreNLP, but I have not tried that out.

The more I explore this space the more convinced I am that natural language processing has the potential to provide some significant improvements to enterprise content search, as well as to content and process analytics.

Branching The Blog Process

I work at Alfresco. I also participate in the Alfresco community and build my own side projects, experiments, etc. Some of these are Alfresco product related, some are not. Sometimes this seems to introduce confusion around what is “official” Alfresco work related to my role and what is a science project or spike to explore an idea. To avoid this confusion, I’m making a small change to the way I blog. Going forward, anything related to supported Alfresco platforms or functionality, troubleshooting, tuning, performance, etc. will be hosted at the Alfresco Premier Services blog. Other stuff related to experimentation, thoughts on content and process services management as a whole, embedded systems, etc. will continue to get posted right here. Hopefully this split will help clarify which articles are related to the product as it is, and separate out the more exploratory stuff.

If you have not popped over to the Premier Team blog yet, check it out!

(Possibly) Enhancing Alfresco Search with Stanford CoreNLP

Laurence Hart recently published an article on CMSWiRE about AI and enterprise search that I found interesting. In it, he lays out some good arguments about why the expectations for AI and enterprise search are a bit overinflated. This is probably a natural part of the hype cycle that AI is currently traversing. While AI probably won’t revolutionize enterprise search overnight, it definitely has the potential to offer meaningful improvements in the short term. One of the areas where I think we can get some easy improvements is by using natural language processing to extract things that might be relevant to search, along with some context around those things. For example, it is handy to be able to search for documents that contain references to people, places, organizations or specific dates using something more than a simple keyword search. It’s useful for your search to know the difference between the china you set on your dinner table and China the country, or Alfresco the company vs. eating outside. Expanding on this work, it might also be useful to do some sentiment analysis on a document, or extract specific parts of it for automatic classification.

Stanford offers a set of tools to help with common natural language processing (NLP) tasks. The Stanford CoreNLP project consists of a framework and a variety of annotators that handle tasks such as sentiment analysis, part of speech tagging, lemmatization, named entity extraction, etc. My favorite thing about this particular project is how they have dropped the barrier to trying it out to practically zero. If you want to give the project a spin and see how it would annotate some text with the base models, Stanford helpfully hosts a version you can test out. I spent an afternoon throwing text at it, both bits I wrote and bits from my test document pool. At first glance it seems to do a pretty good job, even with nothing more than the base models loaded.

I’d like to prove out some of these concepts and explore them further, so I’ve started a simple project to connect Stanford CoreNLP with the Alfresco Content Services platform. The initial goals are simple: take text from a document stored in Alfresco, run it through a few CoreNLP annotators, extract data from the generated annotations, and store that data as Alfresco metadata. This will make annotation data such as named entities (dates, places, people, organizations) directly searchable via Alfresco Search Services. I’m starting with an Alfresco Repository Action that calls CoreNLP, since that will be easy to test on individual documents. It would be pretty straightforward to take this component and run it as a metadata extractor, which might make more sense in the long run. Like most of my Alfresco extension or integration projects, this roughly follows the Service Action Pattern.
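
To give a feel for the shape of the thing, here is a rough sketch of such a repository action. The NlpService interface is a hypothetical stand-in for the CoreNLP wiring, and having the service hand back ready-to-store property values is a simplification for illustration, not the actual project code:

```java
import java.io.Serializable;
import java.util.List;
import java.util.Map;

import org.alfresco.model.ContentModel;
import org.alfresco.repo.action.executer.ActionExecuterAbstractBase;
import org.alfresco.service.cmr.action.Action;
import org.alfresco.service.cmr.action.ParameterDefinition;
import org.alfresco.service.cmr.repository.ContentReader;
import org.alfresco.service.cmr.repository.ContentService;
import org.alfresco.service.cmr.repository.NodeRef;
import org.alfresco.service.cmr.repository.NodeService;
import org.alfresco.service.namespace.QName;

public class NlpEnrichActionExecuter extends ActionExecuterAbstractBase {

    /** Hypothetical service object wrapping the CoreNLP client. */
    public interface NlpService {
        Map<QName, Serializable> extractEntities(String text);
    }

    private ContentService contentService;
    private NodeService nodeService;
    private NlpService nlpService;

    @Override
    protected void executeImpl(Action action, NodeRef actionedUponNodeRef) {
        ContentReader reader =
                contentService.getReader(actionedUponNodeRef, ContentModel.PROP_CONTENT);
        if (reader == null || !reader.exists()) {
            return; // nothing to annotate
        }
        // Run the text through the annotators, then write the extracted
        // entities back as searchable metadata on the node.
        Map<QName, Serializable> props = nlpService.extractEntities(reader.getContentString());
        props.forEach((prop, value) -> nodeService.setProperty(actionedUponNodeRef, prop, value));
    }

    @Override
    protected void addParameterDefinitions(List<ParameterDefinition> paramList) {
        // Annotator selection could be exposed here as action parameters.
    }

    public void setContentService(ContentService contentService) { this.contentService = contentService; }
    public void setNodeService(NodeService nodeService) { this.nodeService = nodeService; }
    public void setNlpService(NlpService nlpService) { this.nlpService = nlpService; }
}
```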

Stanford CoreNLP makes the integration bits pretty easy. You can run CoreNLP as a standalone server, and the project helpfully provides a Java client (StanfordCoreNLPClient) that closely mirrors the annotation pipeline, so if you already know how to use CoreNLP locally, you can easily get it working from an Alfresco integration. This also helps with scalability, since CoreNLP can be memory hungry and running the NLP engine in a separate JVM or on a separate server from Alfresco definitely makes sense. It also makes sense to be judicious about which annotators you run, so that should be configurable in Alfresco. Limiting the size of the text sent to CoreNLP makes sense too, so in the long term some pagination will probably be necessary to break large files into more manageable pieces. The CoreNLP project itself provides some great guidance on getting the best performance out of the tool.
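
For reference, talking to a CoreNLP server from Java takes only a few lines. This is a minimal sketch against a server on localhost:9000, using the standard CoreNLP client classes; note the explicit http://, for the reason covered in the notes below:

```java
import java.util.Properties;

import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLPClient;
import edu.stanford.nlp.util.CoreMap;

public class CoreNlpClientExample {

    public static void main(String[] args) {
        Properties props = new Properties();
        // Annotator order matters: each annotator's dependencies must
        // appear earlier in the pipeline (ner needs tokenize/ssplit/pos/lemma).
        props.setProperty("annotators", "tokenize,ssplit,pos,lemma,ner");

        // Note the explicit http://; without a protocol the client assumes
        // HTTPS and silently stalls against a plain HTTP server.
        StanfordCoreNLPClient pipeline =
                new StanfordCoreNLPClient(props, "http://localhost", 9000, 2);

        Annotation document = new Annotation("Alfresco was founded in 2005 in Maidenhead.");
        pipeline.annotate(document);

        for (CoreMap sentence : document.get(CoreAnnotations.SentencesAnnotation.class)) {
            for (CoreLabel token : sentence.get(CoreAnnotations.TokensAnnotation.class)) {
                System.out.println(token.word() + " -> "
                        + token.get(CoreAnnotations.NamedEntityTagAnnotation.class));
            }
        }
    }
}
```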

A few notes about using CoreNLP programmatically from other applications. First, if you just provide a host name (like localhost), CoreNLP assumes that you will be connecting via HTTPS. This will cause the StanfordCoreNLPClient to not respond if your server isn’t set up for it. Oddly, it doesn’t seem to throw any kind of useful exception; it just sort of, well, stops. If you don’t want to use HTTPS, make sure to specify the protocol in the host name. Second, Stanford makes it pretty easy to use CoreNLP in your application by publishing to Maven Central, but the model jars aren’t there. You’ll need to download those separately. Third, CoreNLP can use a lot of memory when processing large amounts of text. If you plan to do this kind of thing at any kind of scale, you’ll need to run the CoreNLP bits in a separate JVM, and possibly on a separate server. I can’t imagine that Alfresco under load and CoreNLP in the same JVM would yield good results. Fourth, the client also has hefty memory requirements: in my testing, running the CoreNLP client in an Alfresco action with less than 2GB of memory caused out of memory errors when processing 5-6 pages of dense text. Finally, the pipeline that you feed CoreNLP is ordered. If you don’t have the correct annotators in there in the right order, you won’t get the results you expect. Some annotators have dependencies which aren’t always clear until you try to process some text and it fails. Thankfully the error message will tell you what other annotators you need in the pipeline for it to work.

After some experimentation I’m not sure that CoreNLP is really well suited for integration with a content services platform. I had hoped that most of the processing using StanfordCoreNLPClient to connect to a server would take place on the server, with only results returned, but that doesn’t appear to be the case. I still think that using NLP tools to enhance search has merit, though. If you want to play around with this idea yourself you can find my PoC code on Github. It’s a toy at this point, but might help others understand Alfresco, some intricacies of CoreNLP, or both. As a next step I’m going to look at OpenNLP and a few other tools to better understand both the concepts and the space.

AWS Lambda and Alfresco – Connecting Serverless to Content and Process

Let’s start with a rant. I don’t like the term “Serverless” to describe Lambda or other function-as-a-service platforms. Yeah, OK, so you don’t need to spin up servers, or worry about EC2 instances, or any of that stuff. Great. But it still runs on a server of some sort. Even nascent efforts to extend “Serverless” to edge devices still have something that could be called a server: the device itself. If it provides a service, it’s a server. It’s like that Salesforce “No Software” campaign. What? It’s freaking software, no matter what some marketing campaign says. It looks like the name is going to stick, so I’ll use it, but if you wonder why I look like I’ve just bitten into a garden slug every time I say “Serverless”, that’s why.

Naming aside, there’s no doubt this is a powerful paradigm for writing, running and managing code.  For one, it’s simple.  It takes away all the pain of the lower levels of the stack and gives devs a superbly clean and easy environment.  It (should be) scalable.  It (should be) performant.  The appeal is easy to see.  Like most areas that AWS colonizes, Lambda seems to be the front runner in this space.

You know what else runs well in AWS?  Alfresco Content and Process Services.

Lambda -> Alfresco Content / Process Services

It should be fairly simple to call Alfresco Content or Process Services from AWS Lambda. Lambda supports several execution environments, all of which can call an external URL. If you have an Alfresco instance running on, or otherwise reachable from, AWS, you can call it from Lambda. This does, however, require you to write all of the supporting code to make the calls. One Lambda execution environment is Node.js, which probably presents the easiest way to get Lambda talking to Alfresco. Alfresco has a recently released Javascript client API which supports connections to both Alfresco Content Services and Alfresco Process Services. This client API requires at least Node.js 5.x. Lambda supports Node.js 6.10 at the time this article was written, so no problem there!
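
The Node.js route is probably the path of least resistance, but Lambda also supports Java, and nothing stops you from skipping the client library and hitting Alfresco’s public REST API directly. Here is a hypothetical sketch of that shape; the host and credentials are placeholders, and real code would read them from Lambda environment variables:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;
import java.util.Map;
import java.util.stream.Collectors;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

public class AlfrescoPingHandler implements RequestHandler<Map<String, Object>, String> {

    // Hypothetical host; in practice this and the credentials would come
    // from Lambda environment variables, not constants.
    private static final String NODES_URL =
            "https://alfresco.example.com/alfresco/api/-default-/public/alfresco/versions/1/nodes/-root-";

    @Override
    public String handleRequest(Map<String, Object> event, Context context) {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(NODES_URL).openConnection();
            String auth = Base64.getEncoder().encodeToString("admin:admin".getBytes("UTF-8"));
            conn.setRequestProperty("Authorization", "Basic " + auth);
            try (BufferedReader reader =
                         new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
                // Return the raw JSON describing the root node.
                return reader.lines().collect(Collectors.joining("\n"));
            }
        } catch (Exception e) {
            throw new RuntimeException("Call to Alfresco failed", e);
        }
    }
}
```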

Alfresco Content / Process Services -> Lambda

While it’s incredibly useful to be able to call Alfresco services from Lambda, what about triggering Lambda functions from the Alfresco Digital Business Platform? That is also possible; exactly how to do it depends on what you want to accomplish. Lambda supports many ways to invoke a function, some of which may be helpful to us.

S3 bucket events

AWS Lambda functions can be triggered in response to S3 events such as object creation or deletion.  The case that AWS outlines on their web site is a situation where you might want to generate a thumbnail, which Alfresco already handles quite nicely, but it’s not hard to come up with others.  We might want to do something when a node is deleted from the S3 bucket by Alfresco.  For example, this could be used to trigger a notification that content cleanup was successful or to write out an additional audit entry to another system.  Since most Alfresco DBP deployments in AWS use S3 as a backing store, this is an option available to most AWS Alfresco Content or Process Services instances.
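
As a sketch of that deletion case, assuming the standard aws-lambda-java-events binding, a handler might do nothing more than log (or forward) the bucket and key of each removed object:

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;

public class ContentCleanupNotifier implements RequestHandler<S3Event, Void> {

    @Override
    public Void handleRequest(S3Event event, Context context) {
        // Each record carries the bucket and key of the object Alfresco
        // removed; here we just log it, but this is where a notification
        // or an audit write-out to another system would go.
        event.getRecords().forEach(record ->
                context.getLogger().log("Deleted: s3://"
                        + record.getS3().getBucket().getName() + "/"
                        + record.getS3().getObject().getKey()));
        return null;
    }
}
```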

Simple Email Service

Another way to trigger a Lambda function is via the AWS Simple Email Service.  SES is probably more commonly used to send emails, but it can also receive them.  SES can invoke your Lambda function and pass it the email it received.  Sending email can already easily be done from both an Alfresco Process Services BPMN task and from an Alfresco Content Services Action, so this could be an easy way to trigger a Lambda function using existing functionality in response to a workflow event or something occurring in the content repository.

Scheduled Events

AWS CloudWatch Events provides a scheduled event capability. These are configured using either a fixed rate or a cron expression, and use a rule target definition to define which Lambda function to call. A scheduled event isn’t really a way to call Lambda functions from ACS or APS, but it could prove very useful for regular cleanup, archiving or other recurring tasks you wish to run against your Alfresco Content Services instances in AWS. It also gives you a way to trigger things to happen in your APS workflows on a schedule, but that case is probably better handled in the process definition itself.

API Gateway

Our last two options require a little work, but may turn out to be the best for most use cases. Using an API Gateway, you can define URLs that directly trigger your Lambda functions. Triggering these from Alfresco Process Services is simple: just use a REST call task to make the call. Doing so from Alfresco Content Services is a bit trickier, requiring either a custom action or a behavior that makes the call out to the API gateway and passes it the info your Lambda function needs to do its job. Still fairly straightforward, and there are lots of good examples of making HTTP calls from Alfresco Content Services extensions out there in the community.

SNS

AWS Simple Notification Service provides another scalable option for calling Lambda functions.  Like the API gateway option, you could use a REST call task in Alfresco Process Services, or a bit of custom code to make the call from Alfresco Content Services.  AWS SNS supports a simple API for publishing messages to a topic, which can then be used to trigger your Lambda function.

There are quite a few ways to use Alfresco Process and Content Services from Lambda functions, as well as to use Lambda functions to enhance your investment in Alfresco technologies. I plan to do a little spike to explore this further; stay tuned for findings and code samples!

13 Essentials for Building Great Remote Teams

It’s been a while since I wrote a listicle, and this stuff has been on my mind a lot lately. About two years ago I assumed my current role as Alfresco’s Global Director of Premier Support Services. The Premier Services team is scattered across the world, with team members from Sydney to Paris and just about everywhere in between. This has been my first time running a large distributed team; here are some things I’ve found essential to making it work. Some are things you need to have, some are things you need to do:

  1. Find and use a good chat tool.  When your team is spread around the world you can’t live without a good tool for informal asynchronous communications.
  2. But not too many chat tools.  Seriously, this is a problem.  Ask your team what they like, settle on one and stick with it; otherwise you end up with silos, missed messages and a confused group of people.
  3. Use video, even if it feels weird.  Voice chat is great, but there’s no substitute for seeing who you are talking with.  In his book “Silent Messages”, Dr. Albert Mehrabian attributes up to 55% of the impact of a message to the body language of the person presenting the message.  You can’t get that from voice chat alone.
  4. Take advantage of document sharing and collaboration.  A big percentage of our work results in unstructured content in the form of spreadsheets, reports, etc.  We need easy ways to find, collaborate on and share that stuff.  We use Alfresco, naturally.
  5. Have regular face-to-face meetings.  These can be expensive and time consuming, but there is no substitute for meeting in person, shaking hands and sharing a cup of coffee or lunch.  This is especially true for new additions to the crew, during that honeymoon period you need to meet.
  6. Make smart development investments.  When most of your team is remote it is easy to start to feel disconnected from your organization.  Over 5 years of working remotely both as an individual contributor and a leader I know I have felt that way from time to time.  Investing in your team’s professional development is a great way to help them reconnect.  It’s even better if you can use this as an opportunity to get some face time, for example by sending a couple of people from your team to the same conference so they can get to know each other.
  7. Celebrate success, no matter how small.  When everybody works together under one roof it’s easy to congratulate somebody on your team for a job well done.  It’s easy to pull the team together to celebrate a release, or a project milestone, or whatever.  When everybody is remote that becomes simultaneously harder and more important.  Don’t be shy about calling somebody out in your chat, via email or in a team call when they score a win.  Think to yourself “Is this the kind of thing I would walk over and thank somebody for in person?”.  If the answer is yes, then mention it in public.
  8. Raise your team’s profile.  It has been said that a boss takes credit, a leader gives it.  When your entire team is spread around the globe you, as their lead, serve to a certain extent as their interface to upper management and to leaders in other areas of the company.  Use this to your team’s benefit by raising the profile of your top performers to your leadership and to your peers.  When you bring a good idea from your staff to your leadership, your team or anybody else in your company, make sure you let them know exactly where it came from.
  9. Build lots of bridges.  A lot of these essentials come back to the risk of a remote team member becoming disconnected or disengaged.  One way to prevent this is to help your team get and stay engaged in areas other than your own.  Every company I have ever worked for has cross functional teams and initiatives.  Find the places where your teams’ skills and bandwidth align with those cross functional needs and get them connected.  They’ll learn something new, share what they know and contribute to the success of the team.
  10. Shamelessly signal boost.  Many people on my team are active on social media, or with our team blog, or on other channels for knowledge sharing and engagement.  I absolutely encourage this, sharing our knowledge with peers, customers, partners and community members helps everybody.  It takes effort though, an effort that often goes beyond somebody’s core job role.  It’s also a bit scary at times, putting yourself and your ideas out there for everybody to see (and potentially criticize).  If somebody on your team takes the time and the risk, help boost them a bit.  Retweet their post, share it on LinkedIn, post it to internal chats, etc.  Not only will you be helping them spread the knowledge around, but you’re also lending your credibility to their message.
  11. Have defined development paths within (and out of!) your team.  A lot of promotions come from networking, cross functional experiences, word of mouth and other things that are harder to achieve when you work remotely.  As a leader of a remote team, it’s your responsibility to help your people understand the roles within your organization, what is required to move into those roles, and how to get there.  It’s also your job to make sure they know about great opportunities outside your team.  I want my people to be successful, however they define success.  That might be in my team or it might be elsewhere in the company.
  12. Be clear about your goals and how you’ll measure them.  If you have the sort of job that lets you work from home, odds are you aren’t punching a clock.  If you do come into an office every day but your boss is elsewhere 95% of the time, nobody is hovering around making sure you’re there.  Work should be a thing you do, not necessarily a place you go.  The only way this works is if everybody is clear about what we are all trying to achieve together, who’s responsible for what, and how we’ll measure the outcome.  If we all agree on that, you can work from the moon if your internet connection is fast enough.
  13. Trust.  I put this one last because it is easily the most important.  It’s important from the moment you hire somebody that you won’t see in person every day.  Put simply, you cannot possibly have a successful remote organization if you don’t trust the people you work with.  Full stop.  You have to nurture a culture of trust where people aren’t afraid to speak up, where transparency is a core value.

Is this list comprehensive?  Of course not.  Do I still struggle to do this stuff?  Every day, but I keep trying.

A Simple Pattern for Alfresco Extensions

Over the years working with and for Alfresco, I have written a ton of Alfresco extensions. Some of these are for customers, some are for my own education, some for R&D spikes, etc. I’d like to share a common pattern that comes in handy. If you are a super experienced Alfresco developer, this article probably isn’t for you. You know this stuff already!

There are a lot of ways to build Alfresco extensions, and a lot of ways to integrate your own code or connect Alfresco to another product.  There are also a lot of ways you might want to call your own code or an integration, whether that is from an Action, a Behavior, a Web Script, a scheduled job, or via the Alfresco Javascript API.  One way to make your extension as flexible as possible is to use what could informally be called the “Service Action Pattern”.

The Service Action Pattern

Let’s start by describing the Service Action Pattern. In this pattern, we take the functionality that we want to make available to Alfresco and wrap it in a service object. This is a well established pattern in the Alfresco world, used extensively in Alfresco’s own public API. Things like the NodeService, ActionService and ContentService all take core functionality found in the Alfresco platform and wrap it in a well defined service interface: a set of public APIs that return Alfresco objects like NodeRefs, Actions and Paths, or Java primitives. The service object is where all of our custom logic lives, and it provides a well defined interface for other objects to use. In many ways the service object acts as an adapter, translating back and forth between the domain specific objects your extension requires and Alfresco objects. When designing a new service in Alfresco, I find it is a best practice to limit the types returned by the service layer to things that Alfresco natively understands. If your service object method creates a new node, return a NodeRef, for example.
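
To make that concrete, here is what a hypothetical service interface in this style might look like, loosely modeled on the Glacier archive example discussed below. Note that it accepts and returns only Alfresco-native types:

```java
import org.alfresco.service.cmr.repository.NodeRef;

// A hypothetical service interface following the pattern: custom logic
// behind well defined methods that deal only in Alfresco-native types.
public interface GlacierArchiveService {

    /** Push the node's content to the archive tier. */
    void archive(NodeRef nodeRef);

    /** Kick off an asynchronous retrieval of previously archived content. */
    void initiateRetrieval(NodeRef nodeRef);

    /** Bring retrieved content back into the repository. */
    NodeRef retrieve(NodeRef nodeRef);
}
```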

A custom service object on its own isn’t terribly useful, since Alfresco doesn’t know what to do with it.  This is where an Alfresco Action comes in handy.  We can use one or more Alfresco Actions to call the services that our service object exposes.  Creating an action to call the service object has several advantages.  First, once you have an Action you can easily call that Action (and thus the underlying service object) from the Javascript API (more on this in a moment).  Second, it is easy to take an Action and surface it in Alfresco Share for testing or so your users can call it directly.  Actions can also be triggered by folder rules, which can be useful if you need to call some code when a document is created or updated.  Finally, Actions are registered with Alfresco, which makes them easy to find and call from other Java or server side Javascript code via the ActionService.  If you want to do something to a file or folder in Alfresco there is a pretty good chance that an Action is the right approach.
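
That registration is what makes Actions so easy to reach from elsewhere: any code with access to the ActionService can look an action up by name and execute it. A minimal sketch (the action name and parameter here are made up for illustration):

```java
import org.alfresco.service.cmr.action.Action;
import org.alfresco.service.cmr.action.ActionService;
import org.alfresco.service.cmr.repository.NodeRef;

public class ArchiveCaller {

    private ActionService actionService;

    // Looks up a (hypothetical) registered action by name and runs it
    // against a node; parameters flow through to the service object.
    public void archive(NodeRef nodeRef) {
        Action action = actionService.createAction("glacier-archive");
        action.setParameterValue("vault", "my-vault"); // hypothetical parameter
        actionService.executeAction(action, nodeRef);
    }

    public void setActionService(ActionService actionService) { this.actionService = actionService; }
}
```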

Using the Service Action Pattern also makes it simple to expose your service object as a REST API.  Remember that Alfresco Actions can be located and called easily from the Javascript API.  The Javascript API also happens to be (IMHO) the simplest way to build a new Alfresco Web Script.  If you need to call your Action from another system (a very common requirement) you can simply create a web script that exposes your action as a URL and call away.  This does require a bit of boilerplate code to grab request parameters and pass them to the Action, which in turn will call your service object.  It isn’t too much and there are lots of great examples in the Alfresco documentation and out in the community.

So why not just bake the code into the Action itself? Good question! First, any project of some complexity is likely to have a group of related functionality. A good example can be found in the AWS Glacier Archive for Alfresco project we built a couple of years ago at an Alfresco hack-a-thon. This project required us to have Actions for archiving content, initiating a retrieval, and retrieving content. All of these Actions are logically and functionally related, so it makes sense to group them together in a single service. If you want the details of how Alfresco integrates with AWS Glacier, you just have to look at the service implementation class; the Action classes themselves are just sanity checks and wiring. Another good reason to put your logic into a service class is reuse outside of Actions. Actions carry some overhead, and depending on how you plan to use it you may want to make your logic available directly to a behavior, or expose it to the Alfresco Javascript API via a root scope object. Both are straightforward if you have a well defined service object.

I hope this helps you build your next awesome Alfresco platform extension; I have found it a useful way to implement and organize my Alfresco projects.

Alabama Cyber Now Conference Recap

Birmingham’s tech scene keeps getting better!  Yesterday was the second annual Alabama Cyber Now security conference hosted by TechBirmingham, the Central Alabama Chapter of the Information Systems Security Association (CAISSA) and the InfraGard Birmingham Members Alliance.  This conference brings together security and other technology professionals from across the southeast for a day of engaging talks, panels and networking, along with a sizable vendor hall.  Last year’s event was so successful that this year they had to move to a bigger venue.

I’m not a security specialist, but I do think that everybody who works in technology needs a working knowledge of core security concepts. In my day job I work with technology leaders in all sorts of organizations, from government agencies to brand marketing firms, from insurance and banking to healthcare. Security topics come up all the time. Single day events like the Alabama Cyber Now conference provide a great chance to learn from experts in the field about new threat types and security best practices, and to get a glimpse into how they look at securing their systems and those of their customers. For those that need them, the conference also provided a chance to earn CPE credits, a requirement for maintaining CISSP and other security certifications.

The conference was generally very well run, with a few exceptions. The check-in lines got pretty long, leading to some delays; it looked like people were still waiting to check in when the morning keynote started. Maybe next year it would help to open check-in earlier or staff it with more people. Coffee was hard to come by; I didn’t see any stations set up outside of the morning keynote and lunch. Conferences run on coffee. The badges had people’s names and organizations printed in a pretty small font, which made it hard to glance at somebody’s badge and see who they were and what org they were with. But really, those are tiny things that didn’t affect the overall conference experience. The conference team did an awesome job putting on the event, and I’m grateful we had the opportunity to have such a diverse group of speakers delivered right to our doorstep.

The Talks

The highlight of any event is usually the keynote(s), and this event was no exception.  Dave Shackleford was the morning keynote, talking about the challenges and opportunities in cloud security.  Dave brought up some great points and provided some guidance for how to integrate security into the DevOps processes, which he is terming DevSecOps.  Developers and ops won’t tolerate security slowing down their cycles, so security needs to find ways to automate and integrate with the cycles they are already executing.  Velocity means everything today for competitiveness.  If I recall correctly, Dave also declared the old idea of a “bullseye” security model with perimeters to be dead.  This was a theme that was repeated in several other talks in varying ways.

The second keynote is the one that I was really looking forward to. Bruce Schneier is one of the best known and most respected voices in security. Bruce lived up to his reputation, delivering a riveting talk about IoT security and how the internet of things completely changes the way we should be looking at security. The stakes are much higher now; it’s not just data. When everything has a computer in it, software security becomes the security of everything. We have essentially given the internet the ability to gather data from the physical world (sensors), to make decisions based on that data (compute / AI) and to affect the physical world (actuators). Bruce described it as humanity inadvertently building “a world sized robot”. Essentially, his argument is that we aren’t intentionally building this world sized thing; it’s an emergent property of hooking up billions of sensors and actuators to a global network. Bruce ended his talk with a call for regulation to help us avoid the worst consequences of poor IoT security (like the Dyn attack). This made several folks in the audience visibly uncomfortable and sparked a few questions during the Q&A session that followed the talk.

Speaking of the Dyn attack, another talk I really enjoyed was one in which we got to hear from Chris Baker at Dyn.  He gave a detailed timeline of the attack, how it worked and what strategies his team and others used to map and mitigate the attack, and how they are preparing for the next one.   The other talks I attended included how pen testers (and malicious actors) approach phishing, security analytics in a world where each device and user is its own “front” in a war between attackers and defenders, and a great talk on the role of privilege escalation and lateral movement in a breach.  The last one I listed was delivered by Andy Givens from CyberArk, and was one of my favorite breakouts.  I especially enjoyed the case studies where he walked through how the hackers got a foothold, and how they expanded from that initial landing.

Next Year

Assuming this event happens again next year, there are a few things I would love to see added.  A few “101” style talks would be great as an introduction to areas of security that one might not be familiar with.  I’d also like to see some developer focused talks.  Developers can always get better at building more secure applications, evaluating libraries for security concerns before they adopt them, etc.  Maybe a talk could be added for DevOps folks that integrates parts of Dave Shackleford’s DevSecOps model.

All in all, it was a good event, and one that I hope continues next year.  Thanks to all of the organizers, volunteers, sponsors and vendors that made it happen!

Living Behind the Curve – Technology Adoption in Alabama

I’d like to preface this article by saying that it isn’t intended to bash Alabama.  I’ve lived most of my life here, and despite leaving several times to pursue job and educational opportunities, I keep coming back.  I love the south, and I want it to succeed.

These days I’m pretty hot on Birmingham.  We have a growing, passionate tech scene, the city center has sprung back to life and thanks to UAB there is a diverse influx of smart people moving here every day.  There’s a lot to like about living in Birmingham.  The cost of living remains low, the people are friendly, the weather is great and the food is expanding my waistline without draining my wallet.  However, Birmingham is a bit of a bubble.  It’s easy to get swept up in the spirit and energy of the city and forget about the context in which it exists.

Life Behind the Curve

Anecdotal evidence suggests that states like Alabama have systemic features that delay the adoption of innovations.  We look to other states and see things like ride sharing, solar power, deep broadband penetration, fiber networks or municipal wifi, open data / eGovernment initiatives, mobile payments, smart cities projects, technology startups and many other innovations and we wonder “Why not here?”.  It can probably be explained by several factors.

Everett Rogers originally proposed the “diffusion of innovations” theory, and along with it the notion that the diffusion of innovations roughly followed a bell curve distribution.  New ideas (or technology, in this case) follow a curve in which innovators and early adopters pick it up first, followed by early and late majorities, and finally by the laggards.

(Figure: Rogers’ diffusion of innovations bell curve, from innovators through laggards.)

Rogers describes the five groups thus (Diffusion of Innovations, Rogers, 1962):

Innovators: “Innovators are willing to take risks, have the highest social status, have financial liquidity, are social and have closest contact to scientific sources and interaction with other innovators. Their risk tolerance allows them to adopt technologies that may ultimately fail. Financial resources help absorb these failures.”

Early adopters: “These individuals have the highest degree of opinion leadership among the adopter categories. Early adopters have a higher social status, financial liquidity, advanced education and are more socially forward than late adopters. They are more discreet in adoption choices than innovators. They use judicious choice of adoption to help them maintain a central communication position.”

Early Majority: “They adopt an innovation after a varying degree of time that is significantly longer than the innovators and early adopters. Early Majority have above average social status, contact with early adopters and seldom hold positions of opinion leadership in a system.”

Late Majority: “They adopt an innovation after the average participant. These individuals approach an innovation with a high degree of skepticism and after the majority of society has adopted the innovation. Late Majority are typically skeptical about an innovation, have below average social status, little financial liquidity, in contact with others in late majority and early majority and little opinion leadership.”

Laggards: “They are the last to adopt an innovation. Unlike some of the previous categories, individuals in this category show little to no opinion leadership. These individuals typically have an aversion to change-agents. Laggards typically tend to be focused on ‘traditions’, lowest social status, lowest financial liquidity, oldest among adopters, and in contact with only family and close friends.”

Take a look at the words used to describe the innovators and early adopters.  Things like “financial resources”, “closest contact to scientific sources”, “advanced education” and “opinion leadership”.  Now look at the statistics about Alabama.  Alabama consistently sits near the bottom of rankings for educational quality and attainment.  We are one of the poorest states in America, ranking near the middle for GSP (Gross State Product) but close to the bottom for GSP per capita.  Our unemployment rate remains stubbornly high at 6.2% (3rd worst in the country) at the time this was written.  Alabama also has one of the highest income equality gaps (as measured by the Gini coefficient) in the United States.  Perhaps most worrisome is the fact that Alabama is one of the states that consistently loses college graduates year over year.  Statistically, one would expect to see fewer people that fit into the innovators or early adopter category in a state with Alabama’s economic and educational profile.

We can see similar trends in the way late adopters and laggards are described.  Words like “traditional”, “little financial liquidity” and “older” are used to describe folks on the trailing edge of the adoption curve.  Tradition and heritage are a big part of the culture in Alabama and according to the 2010 census Alabama ranks in the bottom 10 in terms of the percentage of the population that lives in urban areas.  Population density means more opportunity to be exposed to new ideas and to have a larger social circle.  Age also plays a role.  Alabama isn’t the oldest state in the union, but it is in the bottom half by average age.  Culturally and demographically, Alabama exhibits many characteristics that lean toward later adoption of innovation.

It seems that Rogers’ theory explains in large part why we don’t see faster and deeper technology adoption in this state, but it doesn’t tell the whole story.

The Effect of Policy

The effect of public policy on innovation adoption is profound.  Several categories of adopter in Rogers’ model indirectly reference cost, usually in terms of an adopter’s financial resources.  It stands to reason that if the cost is higher for a given innovation then the financial resources required to adopt it will also be higher, thus reducing the pool of people that could fall into the innovators or early adopters category even if they were otherwise inclined to pick up a new innovation.  Policy can directly influence adoption cost by driving it down through subsidy, favorable tax treatment and streamlined regulation.   It can also go the other way and drive cost up by making adoption more complex, favoring competing legacy technologies with tax breaks or implementing regulatory hurdles for new competitors.

Unfortunately, Alabama’s leadership hasn’t, well, led. Take solar power as an example. This innovation is seeing broad adoption across the US, helping to offset power usage during daytime peak periods and providing good paying technical employment. Some states have taken steps to accelerate the solar rollout, implementing net metering requirements, tax credits and other policies to drive down the cost and make it accessible to more adopters. Alabama, despite having ample sun as a resource and needing the extra power on hot sunny days to offset the peak loads created by air conditioning, has not only taken almost no meaningful steps toward making solar more affordable but has in fact implemented unnecessary regulatory hurdles. Ride sharing provides another example. The Birmingham City Council fought to keep ride sharing services from launching in the city, asking for onerous regulation that almost kept this innovation out entirely. Now Alabama is proposing mandatory content filters on all mobile devices, with a fee to remove them. This sort of thinking absolutely drives down adoption by reducing choice and increasing costs. I’m not going to speculate about the reasons for all of these artificial barriers, but the outcomes are clear.

Automation, Adoption and Jobs

Why do we need to be concerned about technology adoption in Alabama?  In a word:  Jobs.  Right now a lot of job growth (especially automotive) in Alabama is driven by low cost labor, which is in turn enabled by the low cost of living in the state.  The massive tax breaks the state gives to large employers don’t hurt either.  Other large employment categories in the state include retail and cashiers, material handlers, agriculture and truck drivers.  All of these jobs are ripe for disruption by automation.  As soon as the price of automation drops below the cost of labor, these workers will begin to be replaced.  This is, in my opinion, the best case scenario.  The worst case is that our lack of technology adoption will lead us to resist automation, which will ultimately make us uncompetitive and lead to the current influx of economic activity turning quickly into an exodus.  If we don’t solve the adoption problem, not only will we lose the economic growth we have fought so hard to gain, but we won’t be able to ride the next wave of jobs that will come from automation.

What do we need?

It’s a lot to ask, but what we need are massive investments in all levels of education, a restructuring of our tax system to put the means for adoption into the hands of more people, and more innovation-friendly regulation, all coordinated in a single large push to prepare us for the future. We have an opportunity to use our currently improving economic situation to make the kind of strategic investments that will prepare Alabama for the next century, but we need to start now. Will our leaders answer the call?

Alfresco Premier Services – New Blog

I’m not shy about saying the best thing about my job is my team.  Never in my career have I worked with such a dedicated, skilled and fun group of people.  Whether you are talking about our management team or our individual contributors, the breadth and depth of experience across the Alfresco Premier Services team is impressive.  Members of our team have developed deep expertise across our product line, from auditing to RM, from workflow to search.  Some folks on the team have even branched out and started their own open source projects around Alfresco.  We have decided to take the next step in sharing our knowledge and launch a team blog.

The inaugural post details some recent changes to the Alfresco Premier Services offerings that coincide with the Digital Business Platform launch. We will follow that up in short order with new articles covering both content and process services, including guidance on FTP load balancing and pulling custom property files into a process services workflow.

Lots of exciting stuff to come from the Premier Services Team, stay tuned!