Rethinking the Migration Pipeline with AWS

Migration

Traditionally, migrating content to an Alfresco instance has usually looked very much like an ETL process.  First, content, metadata, and structure are extracted from a source system (be it a shared drive, a legacy ECM system, etc.); that data may then be transformed in some way; and finally it is imported into Alfresco.  Using the OOTB tools, the last step is typically accomplished via the BFSIT (Bulk Filesystem Import Tool).  This approach has a lot of advantages.  It’s well understood, growing from an incredibly common model for database migrations and BI activities.  An ETL-like migration approach provides plenty of opportunities to fix data quality issues while the data is in a “staged” state.  Properly optimized, this approach can be very fast.  In the best cases it only requires re-linking existing content objects into the repository and creating the associated metadata, with no in-flight copying of large content files required.  For many migrations, this approach makes perfect sense.  In the on-premises world, even more so.

The ETL approach does have some downsides.  It’s not great at dealing with a repository that is actively being used, requiring delta migrations later.  It can be a hard thing to scale up (faster per load) and out (more loads running in parallel), especially on-premises where companies are understandably reluctant to add more hardware for what should be a short term need.  It’s typically batch driven, and an import failure of one or more nodes in the batch can really slow things down.

A Better Way?

Alfresco on AWS opens up some new ways of looking at migrations.  Recently I was catching up on some reading and had the occasion to revisit a pair of articles by Gavin Cornwell.  In these two articles Gavin lays out some potential options for using AWS AI services such as Comprehend and Rekognition with Alfresco.  Specifically, he takes a look at AWS Lambda and Step Functions.  Reading that article got me thinking about ingestion pipelines, which in turn got me thinking about system migrations.  We do a LOT of these in Alfresco consulting, and we rely on a toolbox of Alfresco products, our own internal tools, and excellent solutions that are brought to the party by Alfresco partners.

The question that I kept circling around is this:  Does a move to AWS change the patterns that apply to migration?  I think it does.  Full disclosure: this is still very much a work in progress, probably deviates from best practices in a near-infinite number of ways, and is a personal science project at this point.

AWS offers some opportunities to rethink how to approach a migration.  Perhaps the biggest opportunity comes from the ability to think of the migration as a scalable, event-driven process.  Instead of pushing all of the content up to S3 and then importing or linking it into the repository using an in-process import tool or some external tooling that processes the list of content in S3, it is possible to use the events emitted by the S3 writes themselves to take care of validating, importing, and post-processing the content.  Content items are written to S3, which in turn triggers a per-item process that performs the steps required to get the content into the repository.  This is a perfect fit for the serverless compute paradigm provided by AWS.

Consider the following Step Functions State Machine diagram:

[Diagram: Step Functions state machine for the per-item migration pipeline]

For each item written to S3 (presumably, but not necessarily, a content item and a paired object that represents its metadata), S3 can emit a CloudWatch event.  This event matches a rule, which in turn starts a Step Functions state machine.  The state machine neatly encapsulates all of the steps required to validate the content and metadata, import the content into the repository and apply the validated metadata, and then optionally perform post-processing such as image recognition, natural language processing, generation of thumbnails, etc.  Each step in the process maps to an AWS Lambda function which encapsulates one granular part of the validation and import process.  Most of the heavy lifting is done outside of Alfresco, which reduces the need to scale up the cluster during the import.  It should be possible to run a huge number of these state machines in parallel, speeding up the overall migration process while only paying for the compute consumed by the migration itself.  If anything goes wrong before the import of a specific item is complete, it can be rolled back.
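To make that a little more concrete, here is a minimal sketch of what one of those per-item Lambda functions (the metadata validation step) might look like in Node.js.  The state input shape (bucket and metadataKey) and the required field names are illustrative assumptions, not part of any existing tool:

    // Sketch of the "validate metadata" step in the state machine.  Assumes the
    // state machine passes the S3 bucket and the key of a metadata JSON object
    // as input; the field names below are placeholders.
    const AWS = require('aws-sdk');
    const s3 = new AWS.S3();

    const REQUIRED_FIELDS = ['name', 'contentKey', 'type'];

    exports.handler = async (event) => {
      const obj = await s3.getObject({ Bucket: event.bucket, Key: event.metadataKey }).promise();
      const metadata = JSON.parse(obj.Body.toString('utf-8'));

      const missing = REQUIRED_FIELDS.filter((field) => !(field in metadata));
      if (missing.length > 0) {
        // Throwing fails the task, letting the state machine route to a rollback or error state.
        throw new Error('Metadata validation failed, missing: ' + missing.join(', '));
      }

      // Hand the parsed metadata to the next state (the import step).
      return { bucket: event.bucket, metadataKey: event.metadataKey, metadata: metadata };
    };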

Perhaps the best thing about this approach is that it is so easy to adapt.  No two migrations are identical, even when the source and target platforms are common, so flexibility is key.  Flexibility here means not only flexibility during the initial design, but also the ability to adjust the migration to fix issues discovered in flight, or to support more than one import pipeline.  Need to change the way validation is done?  Modify the validation Lambda function.  Don’t need image or text classification, or want to use a different engine?  Remove or change the relevant functions.  Need to decorate imported docs with metadata pulled from another LOB system?  Insert a new step into the process.  It’s easy to change.

Over the next few articles we’ll dive deeper into exactly how this could work, and build out some example Lambda functions to fill in the process and make it do real work.

Making Chat Content Flow with Alfresco


Let’s start with an axiom:  In a modern business, chat is, as much as email, where business gets done.  Every company I have worked with or for in the past decade has come to increasingly rely on chat to coordinate activities within and across teams.  This is great, as chat provides a convenient, asynchronous way to work together.  It fits nicely in that niche between a phone call and an email in terms of urgency of response and formality.  It is no surprise, then, that the uptake of chat tools for business has been high.

I’m a big proponent of using chat for our teams, but using it has uncovered a few challenges.  One of them is managing the content that comes out of chats.  Just about every chat tool has a content sharing facility for sending docs back and forth.  That’s great, but what happens to that content once it is pasted into a chat?  Often that content becomes somewhat ephemeral, maybe viewed when it is dropped into the chat but then forgotten.  What happens when you have segments of a chat that are valuable and should be saved?  If you are using chat as a support channel, for example, that chat content may well form the foundation for knowledge that you want to capture and reuse.  If the chat is related to a deal that sales is working, you might want to capture it as part of your sales process so others know what is happening.

This sort of “knowledge leakage” in chat can be partially solved by the search functions in chat tools, but those are often limited.  Some tools can only go back a certain amount of time or a certain number of messages with the baked-in search functions.  The quality of this search depends on which tool you are using and whether you are using a paid or free version of that tool.  Frustratingly, chat tools do not typically index the content of documents shared via chat.  This means you can probably search for the title of the document or the context in which it was shared, but not the content of the document itself.  Regardless of the quality of search, it only solves part of the problem.  Content in chat may be discoverable, but it isn’t easily shareable or captured in a form that is useful for attaching or including in other processes.  In short, chat content creates chaos.  How can we tame this and make chat a better channel for sharing, capturing, curating and finding real knowledge?

Within our Customer Success team at Alfresco we have done some small research projects into this problem, and have even solved it to a certain extent.  Our first crack at this came in the form of an application that captures certain chats from our teams and saves those chat logs into an Alfresco repository.  This is a great start, as it partially solves one of our problems.  Chat logs are no longer ephemeral; they are captured as documents and saved to Alfresco’s Content Services platform.  From there they are indexed, taggable and linkable, so we can easily share something that came up in the chat with others, in context.  This approach is great for capturing whole chats, but what about saving selected segments, or capturing documents that are attached to chats?

Solving both of these problems is straightforward with Alfresco’s content services platform, a good chat tool with a great API, and a little glue.  For this solution I have set out a few simple goals:

  1. The solution should automatically capture documents added to a chat, and save those documents to Alfresco.
  2. The solution should post a link to the saved document in the chat in which the document originated so that it is easy to find in the content repository.  This also ensures that captured chat logs will have a valid link to the content.  We could ask people to post a doc somewhere and then share a link in the chat, but why do that when we can make it frictionless?
  3. The solution should allow chats or segments of chats to be captured as a text document and saved to Alfresco.
  4. The solution should allow for searching for content without leaving the chat, with search results posted to the chat.

Looking at the goals above, a chat bot seems like a fairly obvious solution.  Chat bots can listen in on a chat channel and act automatically when certain things happen in the chat or can be called to action discretely as needed.  A simple chat bot that speaks Alfresco could meet all of the requirements.  Such a bot would be added to a chat channel and listen for documents being uploaded to the chat.  When that happens the bot can retrieve the document from the chat, upload it to an Alfresco repository, and then post the link back to the chat.  The bot would also need to listen for itself to be called upon to archive part of a chat, at which point it retrieves the specified part of the chat from the provider’s API, saves it to Alfresco and posts a link back to the chat.  Finally, the bot would need to listen for itself to be invoked to perform a search, fetch the search terms from the chat, execute a search in Alfresco and post the formatted results back to the chat channel.  This gives us the tools we need to make content and knowledge capture possible in chat without putting a bunch of extra work on the plate of the chat users.

I’ve put together a little proof of concept for this idea, and released it as an open source project on Github (side note, thank you Andreas Steffan for Dockerizing it!).  It’s implemented as a chatbot for Slack that uses the BotKit framework for the interaction bits.  It’s a simplistic implementation, but it gets the point across.  Upon startup the bot connects to your Slack and can be added to a chat just like any other bot / user.  Once it is there, it will listen for its name and for file upload events, responding more or less as described above.  Check out the readme for more info.
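To give a feel for the shape of the thing, here is a rough sketch of the file-capture flow only.  This is not the actual project code:  it assumes a classic BotKit Slack controller, node-fetch and form-data as dependencies, the Alfresco 5.2+ public REST API, and a handful of environment variables, and it simplifies the Slack event handling and folder scheme considerably:

    // Rough sketch only -- not the actual bot.  Assumes Botkit (classic Slack
    // adapter), node-fetch and form-data are installed, and that SLACK_BOT_TOKEN,
    // ALFRESCO_BASE, ALFRESCO_USER and ALFRESCO_PASS are set in the environment.
    const Botkit = require('botkit');
    const fetch = require('node-fetch');
    const FormData = require('form-data');

    const controller = Botkit.slackbot({});
    controller.spawn({ token: process.env.SLACK_BOT_TOKEN }).startRTM();

    const alfrescoAuth = 'Basic ' + Buffer.from(
      process.env.ALFRESCO_USER + ':' + process.env.ALFRESCO_PASS).toString('base64');

    // 'file_share' is the Slack message subtype fired when a file lands in a channel.
    controller.on('file_share', async (bot, message) => {
      // Pull the file bytes out of Slack using the bot token.
      const slackRes = await fetch(message.file.url_private_download, {
        headers: { Authorization: 'Bearer ' + process.env.SLACK_BOT_TOKEN }
      });
      const fileBuffer = await slackRes.buffer();

      // Push the file into the repository under a per-channel folder.
      const form = new FormData();
      form.append('filedata', fileBuffer, message.file.name);
      form.append('relativePath', 'Chat Captures/' + message.channel);

      const alfRes = await fetch(process.env.ALFRESCO_BASE +
        '/alfresco/api/-default-/public/alfresco/versions/1/nodes/-root-/children',
        { method: 'POST', headers: { Authorization: alfrescoAuth }, body: form });
      const node = (await alfRes.json()).entry;

      // Post a link back to the channel so the chat log stays useful.
      bot.reply(message, 'Saved to Alfresco: ' + process.env.ALFRESCO_BASE +
        '/share/page/document-details?nodeRef=workspace://SpacesStore/' + node.id);
    });

The archive-a-segment and search commands follow the same shape:  listen for the bot being mentioned, call the relevant Slack or Alfresco API, and post the result back to the channel.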

I’d love to see this evolve and get better.  A few things that I’d like to do right away:

  1. Rework the bot to use the Alfresco Unified Javascript API instead of CMIS and direct REST API calls.
  2. Get smarter about the way content gets stored in Alfresco.  Right now it’s just a little logic that builds a folder structure by channel, this could be better.
  3. Improve the way metadata is handled for saved documents / chats so they are easier to find in the future.  Maybe a field that stores chat participant names?
  4. Some NLP smarts.  Perhaps run a chat excerpt through AWS Comprehend and tag the chat excerpt with the extracted topics?
  5. Workflow integration.  I’d love to see the ability to post a document to the chat and request a review which then triggers an Alfresco Process Services process.

If you want to work on this together, pull requests are always welcome!

 

Alfresco Javascript API and AWS Lambda Functions. Part 1, Lambda and Cloud9 101


I’ve written before about several ways to make use of AWS Lambda functions within your Alfresco Content and Process Services deployment.  In preparation for my DevCon talk, I’m diving deeper into this and building out some demos for the session.  I figured this is a good time to do a quick writeup on one way to approach the problem.

What is a Lambda function?

Briefly, AWS Lambda is a “serverless” compute service.  Instead of the old model of provisioning an EC2 instance, loading software and services, etc, Lambda allows you to simply write code against a specific runtime (Java, Node, Python, or .NET) that is executed when an event occurs.  This event can come from a HUGE number of sources within the AWS suite of services.  When your function is called, it is spun up in a lightweight, configurable container and run.  That container may or may not be used again, depending on several factors.  AWS provides some information about what happens under the hood, but the idea is that most of the time you don’t need to sweat it.  In summary, a Lambda function is just a bit of code that runs in response to a triggering event.
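In Node.js terms, the entire contract is an exported handler function that receives the triggering event (plus a context object) and reports a result.  A trivial example:

    // About the smallest possible Lambda function: log the triggering event
    // and hand back a result via the callback (the style used by the Node 6.10
    // runtime current at the time of writing).
    exports.handler = function (event, context, callback) {
      console.log('Received event:', JSON.stringify(event));
      callback(null, { message: 'Hello from Lambda' });
    };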

Preparing the Lambda package

Creating a Lambda function through the AWS UI is trivial.  A few clicks, a couple form fields, and you’re done.  This is fine for simple functions, but what about when you need to use an external library?  The bad news is that this takes a couple extra steps.  The good news is that once you have done it, you can move on to a more productive workflow.  The sticking point with doing it all through the AWS console is the addition of libraries and dependencies.  We can get around that by using a Zip package to start our project.  The Zip package format is pretty simple.  Let’s create one; we’ll call it AlfrescoAPICall.

Start by creating an empty directory for your project, and changing into that directory:

mkdir AlfrescoAPICall

cd AlfrescoAPICall

Next, create a handler for your Lambda function.  The default name for the handler is index.js, but you can change it so long as you configure your Lambda function appropriately.

touch index.js

Now, use npm to install the modules you need into your project directory.  For this example, we’ll use alfresco-js-api.

npm install alfresco-js-api

A real project probably wouldn’t install the needed modules piecemeal; it makes more sense to define all of the dependencies in package.json instead, as sketched a bit further down.  Regardless, at the end of this you should have a project root folder that contains your Lambda function handler, and a node_modules directory that contains all of your dependencies.  Next, we need to Zip this up into a Lambda deployment package.  Don’t Zip up the project folder itself; we need to Zip up the folder contents instead.

zip -r AlfrescoAPICall.zip .

And that’s it!  AlfrescoAPICall.zip is the Lambda package that we’ll upload to AWS so we can get to work.  We don’t need to do this again unless the dependencies or versions change.
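Speaking of dependencies and versions, the package.json for this skeleton could be as simple as the following.  The version range is just a placeholder; pin whatever you actually test against, and then a plain npm install can rebuild node_modules on any machine:

    {
      "name": "alfresco-api-call",
      "version": "1.0.0",
      "description": "Skeleton Lambda function using the Alfresco Javascript client API",
      "dependencies": {
        "alfresco-js-api": "^2.0.0"
      }
    }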

Getting it into AWS

There are a few ways to get our newly created deployment package up to AWS.  It can be done using the AWS CLI, or with the AWS console.  If you do it via the CLI, it will look something like this:

aws lambda update-function-code --function-name AlfrescoAPICall --zip-file fileb://AlfrescoAPICall.zip

If you do it via the AWS console, you can simply choose the file to upload:

[Screenshot: uploading the deployment package in the Lambda console]

Regardless of how you update your Lambda function, once you have uploaded your zip archive you can see the entire project in the “edit code inline” view in the Lambda page:

[Screenshot: the uploaded project in the Lambda inline code editor]

Woo hoo!  Now we have a skeleton Lambda project that includes the Alfresco Javascript Client API and we can start building!
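To give a sense of where this is headed, a first cut of index.js might do nothing more than authenticate against a repository.  This is a sketch based on the alfresco-js-api documentation at the time of writing; depending on your version you may need the alfresco-js-api-node package instead, so treat the package and option names as assumptions to verify:

    // index.js - log into an Alfresco Content Services instance and return a result.
    var AlfrescoApi = require('alfresco-js-api');

    exports.handler = function (event, context, callback) {
      var alfrescoJsApi = new AlfrescoApi({
        provider: 'ECM',
        hostEcm: process.env.ALFRESCO_HOST   // e.g. http://alfresco.example.com:8080
      });

      alfrescoJsApi.login(process.env.ALFRESCO_USER, process.env.ALFRESCO_PASS)
        .then(function (ticket) {
          console.log('Logged in, ticket: ' + ticket);
          callback(null, { loggedIn: true });
        })
        .catch(function (error) {
          callback(error);
        });
    };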

Starting development in earnest

This process is simple, but it isn’t exactly the easiest way to build in AWS.  With the relaunch of Cloud9 at re:Invent, AWS has a pretty good web based IDE that we can use for this project.  I’m not going to go through all the steps of creating a Cloud9 environment, but once you have it created you should see your newly created Lambda function in the right-hand pane under the AWS Resources tab.  If you don’t, make sure the IAM account you are using with Cloud9 (not the root account!!) has access to your function.  It will be listed under Remote Functions.  Here’s where it gets really cool.  Right click on the remote function, and you can import the whole thing right into your development environment:

[Screenshot: importing the remote Lambda function into Cloud9]

 

Neat, right?  After you import it you’ll see it show up in the project explorer view on the left.  From here it is basically like any other IDE.  Tabbed code editor, tree views, syntax highlighting and completion, etc, etc.

[Screenshot: the imported function in the Cloud9 editor]

One cool feature of Cloud9 is the ability to test run your Lambda function locally (in the EC2 instance Cloud9 is connected to) or on the AWS Lambda service itself by picking the option from the menu in the run panel. As one would expect, you can also set breakpoints in your Lambda function for debugging:

[Screenshot: running and debugging the Lambda function in Cloud9]

Finally, once you are done with your edits and have tested your function to your satisfaction, getting the changes back up to the Lambda service is trivial.  Save your changes, right click on the local function, and select deploy:

[Screenshot: deploying the local function back to the Lambda service]

Simple, right?  Now we have a working Lambda function with the Alfresco Javascript Client API available, and we can start developing something useful!  In part two, we’ll continue by getting Alfresco Process Services up and running in AWS and extending this newly created function to interact with a process.

 

Alfresco DevCon is Back in 2018


Alfresco DevCon has always been my favorite of the many events that we host around the world.  Unfortunately it has been on a bit of a hiatus lately as we explored other ways to connect with the many personas that we work with every day.  Good news, though:  it’s back in 2018!  Alfresco brings to market an awesome platform for accelerating digital business, and you can’t be a platform company without giving your developers a big friendly hug (and tools, and best practices, and a whole host of other things).  This is THE best opportunity to come and interact with all of the extended Alfresco developer family.  I’ve had a sneak peek at the presenter list and it’s an incredibly diverse group pulled from many of our stellar customers, partners, community members, and of course, a healthy dose of Alfrescans from across the technical parts of the company.

In what turned out to be a happy coincidence, I received the notice that my talks were accepted on my birthday.  Talk about a great present!  I’ve signed up to do two talks.  The first is a lightning talk on using natural language processing techniques to improve the quality of enterprise search.  The lightning talk got a shot in the arm with the recent announcement of AWS Comprehend.  Originally this talk was just going to cover some on-premises offerings as well as Google Cloud NLP.  I’m excited to play around with the new AWS service and see how it stacks up.

The second talk I’m going to do in Portugal is a full-length presentation on using Alfresco Process Services with AWS IoT:

[Screenshot: Alfresco Process Services and AWS IoT]

Originally the IoT talk was going to use a custom-built device, probably a Raspberry Pi or an Arduino-based board.  However, when one of these little guys showed up in the mail I decided to change it up a bit and use Amazon hardware.

[Photo: the Amazon device in question]

If these topics sound familiar, they should; I’ve blogged about both of them in the past.  Alfresco DevCon provides a great opportunity to build on and refine those ideas and vet them with a highly technical audience.  Hopefully the talks will spark some interesting conversations afterward.

I can’t overstate how happy I am that DevCon is back, and I can’t wait to see all of you in Lisbon in January!

 

 

(Possibly) Enhancing Alfresco Search Part 2 – Google Cloud’s Natural Language API


In the first article in this series, we took a look at using Stanford’s CoreNLP library to enrich Alfresco Content Services metadata with some natural language processing tools.  In particular, we looked at using named entity extraction and sentiment analysis to add some value to enterprise search.  As soon as I posted that article, several people got in touch to see if I was working on testing out any other NLP tools.  In part 2 of this series, we’ll take a look at Google Cloud’s Natural Language API to see if it is any easier to integrate and scale, and do a brief comparison of the results.

One little thing I discovered during testing that may be of note if anybody picks up the Github code to try to do anything useful with it:  Alfresco and Google Cloud’s Natural Language API library can’t play nice together due to conflicting dependencies on some of the Google components.  In particular, Guava is a blocker.  Alfresco ships with and depends on an older version.  Complicating matters further, the Guava APIs changed between the version Alfresco ships and the version that the Google Cloud Natural Language API library requires, so it isn’t as straightforward as grabbing the newer Guava library and swapping it out.  I have already had a quick chat with Alfresco Engineering and it looks like this is on the list to be solved soon.  In the meantime, I’m using Apache HttpClient to access the relevant services directly.  It’s not quite as nice as the idiomatic approach that the Google Cloud SDK takes, but it will do for now.

Metadata Enrichment and Extraction

The main purpose of these little experiments has been to assess how suitable each tool may be for using NLP to improve search.  This is where, I think, Google’s Natural Language product could really shine.  Google is, after all, a search company (and yes, more than that too).  Google’s entity analyzer not only plucks out all of the named entities, but it also returns a salience score for each.  The higher the score, the more important or central that entity is to the entire text.  The API also returns the number of proper noun mentions for that entity.  This seems to work quite well, and the salience score isn’t looking at just the number of mentions.  During my testing I found several instances where the most salient result was not that which was mentioned the most.  Sorting by salience and only making those most relevant mentions searchable metadata in Alfresco would be useful.  Say, for example, we are looking for documents about XYZ Corporation.  A simple keyword search would return every document that mentions that company, even if the document wasn’t actually about it.  Searching only those documents where XYZ Corporation is the most salient entity (even if not the most frequently mentioned) in the document would give us much more relevant results.

Sentiment analysis is another common feature in many natural language processing suites that may be useful in a content services search context.  For example, if you are using your content services platform to store customer survey results, transcripts of chats or other documents that capture an interaction, you might want to find those that were strongly negative or positive to serve as training examples.  Another great use case exists in the process services world, where processes are likely to capture interactions in a more direct fashion.  Sentiment analysis is an area where Google’s and CoreNLP’s approaches differ significantly.  The Google Natural Language API provides two ways to handle sentiment analysis.  The first analyzes the overall sentiment of the provided text; the second provides sentiment analysis related to identified entities within the text.  These are fairly simplistic compared with the full sentiment graph that CoreNLP generates.  Google ranks sentiment along a scale of -1 to 1, with -1 being the most negative, and 1 the most positive.

Lower Level Features

At the core of any NLP tool are the basics of language parsing and processing such as tokenization, sentence splitting, part of speech tagging, lemmatization, dependency parsing, etc.  The Google Cloud NL API exposes all of these features through its syntax analysis API and the token object.  The object syntax is clear and easy to understand.  There are some important differences in the way these are implemented across CoreNLP and Google Cloud NL, which I may explore further in a future article.

Different Needs, Different Tools

Google Cloud’s Natural Language product differs from CoreNLP in some important ways.  The biggest is simply the fact that one is a cloud service and one is traditionally released software.  This has its pros and cons, of course.  If you roll your own NLP infrastructure with CoreNLP (whether you do it on-premises or in the cloud) you’ll certainly have more control but you’ll also be responsible for managing the thing.  For some use cases this might be the critical difference.  Best I can tell, Google doesn’t allow for custom models or annotators (yet).  If you need to train your own system or build custom stuff into the annotation pipeline, Google’s NLP offering may not work for you.  This is likely to be a shortcoming of many of the cloud based NLP services.

Another key difference is language support.  CoreNLP ships models for English, Arabic, Chinese, French, German and Spanish, but not all annotators work for all languages.  CoreNLP also has contributed models in other languages of varying completeness and quality.  Google Cloud’s NLP API has full fledged support for English, Japanese and Spanish, with beta support for Chinese (simplified and traditional), French, German, Italian, Korean and Portuguese.  Depending on where you are and what you need to analyze, language support alone may drive your choice.

On the feature front there are also some key differences when you compare “out of the box” CoreNLP with the Google Cloud NL API.  The first thing I tested was entity recognition.  I have been doing a little testing with a collection of short stories from American writers, and so far both seem to do a fair job of recognizing basic named entities like people, places, organizations, etc.  Google’s API goes further though, and will recognize and tag things like the names of consumer goods, works of art, and events.  CoreNLP would take more work to do that sort of thing; it isn’t handled by the models that ship with the code.  On sentiment analysis, CoreNLP is much more comprehensive (at least in my admittedly limited evaluation).

Scalability and ergonomics are also concerns. If you plan to analyze a large amount of content there’s no getting around scale.  Without question, Google wins, but at a cost.  The Cloud Natural Language API uses a typical utilization cost model.  The more you analyze, the more you pay.  Ergonomics is another area where Google Cloud NL has a clear advantage.  CoreNLP is a more feature rich experience, and that shows in the model it returns.  Google Cloud NL API just returns a logically structured JSON object, making it much easier to read and interpret the results right away.  There’s also the issue of interface.  CoreNLP relies on a client library.  Google Cloud NL API is just a set of REST calls that follow the usual Google conventions and authentication schemes.  There has been some work to put a REST API on top of CoreNLP, but I have not tried that out.

The more I explore this space the more convinced I am that natural language processing has the potential to provide some significant improvements to enterprise content search, as well as to content and process analytics.

 

(Possibly) Enhancing Alfresco Search with Stanford CoreNLP


Laurence Hart recently published an article on CMSWiRE about AI and enterprise search that I found interesting.  In it, he lays out some good arguments about why the expectations for AI and enterprise search are a bit overinflated.  This is probably a natural part of the hype cycle that AI is currently traversing.  While AI probably won’t revolutionize enterprise search overnight, it definitely has the potential to offer meaningful improvements in the short term.  One of the areas where I think we can get some easy improvements is by using natural language processing to extract things that might be relevant to search, along with some context around those things.  For example, it is handy to be able to search for documents that contain references to people, places, organizations or specific dates using something more than a simple keyword search.  It’s useful for your search to know the difference between the china you set on your dinner table and China the country, or Alfresco the company vs eating outside.  Expanding on this work, it might also be useful to do some sentiment analysis on a document, or extract specific parts of it for automatic classification.

Stanford offers a set of tools to help with common natural language processing (NLP) tasks.  The Stanford CoreNLP project consists of a framework and variety of annotators that handle tasks such as sentiment analysis, part of speech tagging, lemmatization, named entity extraction, etc.  My favorite thing about this particular project is how they have simply dropped the barriers to trying it out to zero.  If you want to give the project a spin and see how it would annotate some text with the base models, Stanford helpfully hosts a version you can test out.  I spent an afternoon throwing text at it, both bits I wrote, and bits that come from some of my test document pool.  At first glance it seems to do a pretty good job, even with nothing more than the base models loaded.

I’d like to prove out some of these concepts and explore them further, so I’ve started a simple project to connect Stanford CoreNLP with the Alfresco Content Services platform.  The initial goals are simple:  Take text from a document stored in Alfresco, run it through a few CoreNLP annotators, extract data from the generated annotations, and store that data as Alfresco metadata.  This will make annotation data such as named entities (dates, places, people, organizations) directly searchable via Alfresco Search Services.  I’m starting with an Alfresco Repository Action that calls CoreNLP since that will be easy to test on individual documents.  It would be pretty straightforward to take this component and run it as a metadata extractor, which might make more sense in the long run.  Like most of my Alfresco extension or integration projects, this roughly follows the Service Action Pattern.

Stanford CoreNLP makes the integration bits pretty easy.  You can run CoreNLP as a standalone server, and the project helpfully provides a Java client (StanfordCoreNLPClient) that somewhat closely mirrors the annotation pipeline, so if you already know how to use CoreNLP locally, you can easily get it working from an Alfresco integration.  This will also help with scalability since CoreNLP can be memory hungry, and running the NLP engine in a separate JVM or server from Alfresco definitely makes sense.  It also makes sense to be judicious about which annotators you run, so that should be configurable in Alfresco.  It also makes sense to limit the size of the text that gets sent to CoreNLP, so long term some pagination will probably be necessary to break down large files into more manageable pieces.  The CoreNLP project itself provides some great guidance on getting the best performance out of the tool.

A couple of notes about using CoreNLP programmatically from other applications.  First, if you just provide a host name (like localhost) then CoreNLP assumes that you will be connecting via HTTPS.   This will cause the StanfordCoreNLPClient to not respond if your server isn’t set up for it.  Oddly, it also doesn’t seem to throw any kind of useful exception, it just sort of, well, stops.  If you don’t want to use HTTPS, make sure to specify the protocol in the host name.  Second, Stanford makes it pretty easy to use CoreNLP in your application by publishing on Maven Central, but the model jars aren’t there.  You’ll need to download those separately.  Third, CoreNLP can use a lot of memory for processing large amounts of text.  If you plan to do this kind of thing at any kind of scale, you’ll need to run the CoreNLP bits on a separate JVM, and possibly a separate server.  I can’t imagine that Alfresco under load and CoreNLP in the same JVM would yield good results.  Fourth, the client also has hefty memory requirements.  In my testing, running CoreNLP client in an Alfresco action with less than 2GB of memory caused out of memory errors when processing 5-6 pages of dense text.  Finally, the pipeline that you feed CoreNLP is ordered.  If you don’t have the correct annotators in there in the right order, you won’t get the results you expect.  Some annotators have dependencies, which aren’t always clear until you try to process some text and it fails.  Thankfully the error message will tell you what other annotators you need in the pipeline for it to work.

After some experimentation I’m not sure that CoreNLP is really well suited for integration with a content services platform.  I had hoped that most of the processing when using StanfordCoreNLPClient to connect to a server would take place on the server, with only results returned, but that doesn’t appear to be the case.  I still think that using NLP tools to enhance search has merit, though.  If you want to play around with this idea yourself you can find my PoC code on Github.  It’s a toy at this point, but might help others understand Alfresco, some intricacies of CoreNLP, or both.  As a next step I’m going to look at OpenNLP and a few other tools to better understand both the concepts and the space.

 

AWS Lambda and Alfresco – Connecting Serverless to Content and Process


Let’s start with a rant.  I don’t like the term “Serverless” to describe Lambda or other function as a service platforms.  Yeah, OK, so you don’t need to spin up servers, or worry about EC2 instances, or any of that stuff.  Great.  But it still runs on a server of some sort.  Even nascent efforts to extend “Serverless” to edge devices still have something that could be called a server, the device itself.  If it provides a service, it’s a server.  It’s like that Salesforce “No Software” campaign.  What?  It’s freaking software, no matter what some marketing campaign says.  It looks like the name is going to stick, so I’ll use it, but if you wonder why I look like I’ve just bit into a garden slug every time I say “Serverless”, that’s why.

Naming aside, there’s no doubt this is a powerful paradigm for writing, running and managing code.  For one, it’s simple.  It takes away all the pain of the lower levels of the stack and gives devs a superbly clean and easy environment.  It is (or should be) scalable.  It is (or should be) performant.  The appeal is easy to see.  Like most areas that AWS colonizes, Lambda seems to be the front runner in this space.

You know what else runs well in AWS?  Alfresco Content and Process Services.

Lambda -> Alfresco Content / Process Services

It should be fairly simple to call Alfresco Content or Process Services from AWS Lambda.  Lambda supports several execution environments, all of which support calling an external URL.  If you have an Alfresco instance running on or otherwise reachable from AWS, you can call it from Lambda.  This does, however, require you to write all of the supporting code to make the calls.  One Lambda execution environment is Node.js, which probably presents us with the easiest way to get Lambda talking to Alfresco.  Alfresco has a recently released Javascript client API which supports connections to both Alfresco Content Services and Alfresco Process Services.  This client API requires at least Node.js 5.x.  Lambda supports 6.10 at the time this article was written, so no problem there!

Alfresco Content / Process Services -> Lambda

While it’s incredibly useful to be able to call Alfresco services from Lambda, what about triggering Lambda functions from the Alfresco Digital Business Platform?  That part is also possible; exactly how to do it depends on what you want to do.  Lambda supports many ways to invoke a function, some of which may be helpful to us.

S3 bucket events

AWS Lambda functions can be triggered in response to S3 events such as object creation or deletion.  The case that AWS outlines on their web site is a situation where you might want to generate a thumbnail, which Alfresco already handles quite nicely, but it’s not hard to come up with others.  We might want to do something when a node is deleted from the S3 bucket by Alfresco.  For example, this could be used to trigger a notification that content cleanup was successful or to write out an additional audit entry to another system.  Since most Alfresco DBP deployments in AWS use S3 as a backing store, this is an option available to most AWS Alfresco Content or Process Services instances.
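As a rough sketch, the receiving side of that idea is only a few lines of Node.js.  The handler below just logs the removed object; the audit call is left as a comment because it depends entirely on the system you want to notify:

    // Sketch of a handler wired to S3 ObjectRemoved events on the bucket that
    // backs the Alfresco content store.  Bucket and key come straight from the
    // standard S3 event record.
    exports.handler = function (event, context, callback) {
      event.Records.forEach(function (record) {
        var bucket = record.s3.bucket.name;
        var key = record.s3.object.key;
        console.log('Content object removed from ' + bucket + ': ' + key);
        // ...write an audit entry or fire a notification here...
      });
      callback(null, 'Processed ' + event.Records.length + ' record(s)');
    };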

Simple Email Service

Another way to trigger a Lambda function is via the AWS Simple Email Service.  SES is probably more commonly used to send emails, but it can also receive them.  SES can invoke your Lambda function and pass it the email it received.  Sending email can already easily be done from both an Alfresco Process Services BPMN task and from an Alfresco Content Services Action, so this could be an easy way to trigger a Lambda function using existing functionality in response to a workflow event or something occurring in the content repository.

Scheduled Events

AWS CloudWatch Events provides a scheduled event capability.  These events are configured using either a fixed rate or a cron expression, and use a rule target definition to define which Lambda function to call.  A scheduled event isn’t really a way to call Lambda functions from ACS or APS, but it could prove to be very useful for regular cleanup events, archiving or other recurring tasks you wish to run against your Alfresco Content Services instances in AWS.  It also gives you a way to trigger things to happen in your APS workflows on a schedule, but that case is probably better handled in the process definition itself.

API Gateway

Our last two options would require a little work, but may turn out to be the best fit for most use cases.  Using an API Gateway you can define URLs that can be used to directly trigger your Lambda functions.  Triggering these from Alfresco Process Services is simple:  just use a REST call task to make the call.  Doing so from Alfresco Content Services is a bit trickier, requiring either a custom action or a behavior that makes the call out to the API gateway and passes it the info your Lambda function needs to do its job.  Still fairly straightforward, and there are lots of good examples of making HTTP calls from Alfresco Content Services extensions out there in the community.

SNS

AWS Simple Notification Service provides another scalable option for calling Lambda functions.  Like the API gateway option, you could use a REST call task in Alfresco Process Services, or a bit of custom code to make the call from Alfresco Content Services.  AWS SNS supports a simple API for publishing messages to a topic, which can then be used to trigger your Lambda function.
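On the Lambda side, the SNS-triggered event wraps whatever was published to the topic.  A minimal receiving handler might look like the sketch below; the assumption that the publisher sends JSON with a nodeRef field is purely illustrative:

    // Sketch of a handler subscribed to an SNS topic.  The published message
    // arrives as a string in Records[n].Sns.Message.
    exports.handler = function (event, context, callback) {
      event.Records.forEach(function (record) {
        var payload = JSON.parse(record.Sns.Message);  // assumes the publisher sent JSON
        console.log('Received notification for node: ' + payload.nodeRef);
      });
      callback(null, 'Done');
    };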

There are quite a few ways to use Alfresco Process and Content Services from Lambda functions, as well as to use Lambda functions to enhance your investment in Alfresco technologies.  I plan to do a little spike to explore this further; stay tuned for findings and code samples!

 

13 Essentials for Building Great Remote Teams


It’s been a while since I wrote a listicle, and this stuff has been on my mind a lot lately.  About two years ago I assumed my current role as Alfresco’s Global Director of Premier Support Services.  The Premier Services team is scattered across the world, with team members from Sydney to Paris and just about everywhere in between.  This has been my first time running a large distributed team; here are some things I’ve found essential to making it work.  Some are things you need to have, some are things you need to do:

  1. Find and use a good chat tool.  When your team is spread around the world you can’t live without a good tool for informal asynchronous communications.
  2. But not too many chat tools.  Seriously, this is a problem.  Ask your team what they like, settle on one and stick with it; otherwise you end up with silos, missed messages and a confused group of people.
  3. Use video, even if it feels weird.  Voice chat is great, but there’s no substitute for seeing who you are talking with.  In his book “Silent Messages“, Dr. Albert Mehrabian attributes up to 55% of the impact of a message to the body language of the person presenting the message.  You can’t get that from voice chat alone.
  4. Take advantage of document sharing and collaboration.  A big percentage of our work results in unstructured content in the form of spreadsheets, reports, etc.  We need easy ways to find, collaborate on and share that stuff.  We use Alfresco, naturally.
  5. Have regular face-to-face meetings.  These can be expensive and time consuming, but there is no substitute for meeting in person, shaking hands and sharing a cup of coffee or lunch.  This is especially true for new additions to the crew, during that honeymoon period you need to meet.
  6. Make smart development investments.  When most of your team is remote it is easy to start to feel disconnected from your organization.  Over 5 years of working remotely both as an individual contributor and a leader I know I have felt that way from time to time.  Investing in your team’s professional development is a great way to help them reconnect.  It’s even better if you can use this as an opportunity to get some face time, for example by sending a couple of people from your team to the same conference so they can get to know each other.
  7. Celebrate success, no matter how small.  When everybody works together under one roof it’s easy to congratulate somebody on your team for a job well done.  It’s easy to pull the team together to celebrate a release, or a project milestone, or whatever.  When everybody is remote that becomes simultaneously harder and more important.  Don’t be shy about calling somebody out in your chat, via email or in a team call when they score a win.  Think to yourself “Is this the kind of thing I would walk over and thank somebody for in person?”.  If the answer is yes, then mention it in public.
  8. Raise your team’s profile.  It has been said that a boss takes credit, a leader gives it.  When your entire team is spread around the globe you, as their lead, serve to a certain extent as their interface to upper management and to leaders in other areas of the company.  Use this to your team’s benefit by raising the profile of your top performers to your leadership and to your peers.  When you bring a good idea from your staff to your leadership, your team or anybody else in your company, make sure you let them know exactly where it came from.
  9. Build lots of bridges.  A lot of these essentials come back to the risk of a remote team member becoming disconnected or disengaged.  One way to prevent this is to help your team get and stay engaged in areas other than your own.  Every company I have ever worked for has cross functional teams and initiatives.  Find the places where your teams’ skills and bandwidth align with those cross functional needs and get them connected.  They’ll learn something new, share what they know and contribute to the success of the team.
  10. Shamelessly signal boost.  Many people on my team are active on social media, or with our team blog, or on other channels for knowledge sharing and engagement.  I absolutely encourage this, sharing our knowledge with peers, customers, partners and community members helps everybody.  It takes effort though, an effort that often goes beyond somebody’s core job role.  It’s also a bit scary at times, putting yourself and your ideas out there for everybody to see (and potentially criticize).  If somebody on your team takes the time and the risk, help boost them a bit.  Retweet their post, share it on LinkedIn, post it to internal chats, etc.  Not only will you be helping them spread the knowledge around, but you’re also lending your credibility to their message.
  11. Have defined development paths within (and out of!) your team.  A lot of promotions come from networking, cross functional experiences, word of mouth and other things that are harder to achieve when you work remotely.  As a leader of a remote team, it’s your responsibility to help your people understand the roles within your organization, what is required to move into those roles, and how to get there.  It’s also your job to make sure they know about great opportunities outside your team.  I want my people to be successful, however they define success.  That might be in my team or it might be elsewhere in the company.
  12. Be clear about your goals and how you’ll measure them.  If you have the sort of job that lets you work from home, odds are you aren’t punching a clock.  If you do come into an office every day but your boss is elsewhere 95% of the time, nobody is hovering around making sure you’re there.  Work should be a thing you do, not necessarily a place you go.  The only way this works is if everybody is clear about what we are all trying to achieve together, who’s responsible for what, and how we’ll measure the outcome.  If we all agree on that, you can work from the moon if your internet connection is fast enough.
  13. Trust.  I put this one last because it is easily the most important.  It’s important from the moment you hire somebody that you won’t see in person every day.  Put simply, you cannot possibly have a successful remote organization if you don’t trust the people you work with.  Full stop.  You have to nurture a culture of trust where people aren’t afraid to speak up, where transparency is a core value.

Is this list comprehensive?  Of course not.  Do I still struggle to do this stuff?  Every day, but I keep trying.

A Simple Pattern for Alfresco Extensions

Over the years I have worked with and for Alfresco, I have written a ton of Alfresco extensions.  Some of these are for customers, some are for my own education, some for R&D spikes, etc.  I’d like to share a common pattern that comes in handy.  If you are a super experienced Alfresco developer, this article probably isn’t for you.  You know this stuff already!

There are a lot of ways to build Alfresco extensions, and a lot of ways to integrate your own code or connect Alfresco to another product.  There are also a lot of ways you might want to call your own code or an integration, whether that is from an Action, a Behavior, a Web Script, a scheduled job, or via the Alfresco Javascript API.  One way to make your extension as flexible as possible is to use what could informally be called the “Service Action Pattern”.

The Service Action Pattern

[Diagram: Service Action Pattern sequence]

Let’s start by describing the Service Action Pattern.  In this pattern, we take the functionality that we want to make available to Alfresco and we wrap it in a service object.  This is a well established pattern in the Alfresco world, used extensively in Alfresco’s own public API.  Things like the NodeService, ActionService, ContentService, etc all take core functionality found in the Alfresco platform and wrap it in a well defined service interface consisting of a set of public APIs that return Alfresco objects like NodeRefs, Actions, Paths, etc, or Java primitives.  The service object is where all of our custom logic lives, and it provides a well defined interface for other objects to use.  In many ways the service object serves as a sort of adapter pattern in that we are using the service object to translate back and forth between the types of domain specific objects that your extension requires and Alfresco objects.  When designing a new service in Alfresco, I find it is a best practice to limit the types of objects returned by the service layer to those things that Alfresco natively understands.  If your service object method creates a new node, return a NodeRef, for example.

A custom service object on its own isn’t terribly useful, since Alfresco doesn’t know what to do with it.  This is where an Alfresco Action comes in handy.  We can use one or more Alfresco Actions to call the services that our service object exposes.  Creating an action to call the service object has several advantages.  First, once you have an Action you can easily call that Action (and thus the underlying service object) from the Javascript API (more on this in a moment).  Second, it is easy to take an Action and surface it in Alfresco Share for testing or so your users can call it directly.  Actions can also be triggered by folder rules, which can be useful if you need to call some code when a document is created or updated.  Finally, Actions are registered with Alfresco, which makes them easy to find and call from other Java or server side Javascript code via the ActionService.  If you want to do something to a file or folder in Alfresco there is a pretty good chance that an Action is the right approach.

Using the Service Action Pattern also makes it simple to expose your service object as a REST API.  Remember that Alfresco Actions can be located and called easily from the Javascript API.  The Javascript API also happens to be (IMHO) the simplest way to build a new Alfresco Web Script.  If you need to call your Action from another system (a very common requirement) you can simply create a web script that exposes your action as a URL and call away.  This does require a bit of boilerplate code to grab request parameters and pass them to the Action, which in turn will call your service object.  It isn’t too much and there are lots of great examples in the Alfresco documentation and out in the community.
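For reference, locating and firing an action from the server-side Javascript API (whether in a web script controller or a script run by a folder rule) only takes a few lines.  The action name and parameter below are placeholders for whatever your own Action registers:

    // Server-side Javascript: look up a registered action, set its parameters,
    // and run it against a node.  "my-custom-action" and "targetSystem" are
    // placeholders; 'document' stands for whatever node the script has resolved.
    var action = actions.create("my-custom-action");
    action.parameters["targetSystem"] = "some-lob-system";
    action.execute(document);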

So why not just bake the code into the Action itself?  Good question!  First, any project of some complexity is likely to have a group of related functionality.  A good example can be found in the AWS Glacier Archive for Alfresco project we built a couple years ago at an Alfresco hack-a-thon.  This project required us to have Actions for archiving content, initiating a retrieval, and retrieving content.  All of these Actions are logically and functionally related, so it makes sense to group them together in a single service.  If you want the details of how Alfresco integrates with AWS Glacier, you just have to look at the service implementation class; the Action classes themselves are just sanity checks and wiring.  Another good reason to put your logic into a service class is for reuse outside of Actions.  Actions carry some overhead, and depending on how you plan to use it you may want to make your logic available directly to a behavior or expose it to the Alfresco Javascript API via a root scope object.  Both of these are straightforward if you have a well-defined service object.

I hope this helps you build your next awesome Alfresco platform extension, I have found it a useful way to implement and organize my Alfresco projects.

Alfresco Premier Services – New Blog


I’m not shy about saying the best thing about my job is my team.  Never in my career have I worked with such a dedicated, skilled and fun group of people.  Whether you are talking about our management team or our individual contributors, the breadth and depth of experience across the Alfresco Premier Services team is impressive.  Members of our team have developed deep expertise across our product line, from auditing to RM, from workflow to search.  Some folks on the team have even branched out and started their own open source projects around Alfresco.  We have decided to take the next step in sharing our knowledge and launch a team blog.

The inaugural post details some recent changes to the Alfresco Premier Services offerings that coincide with the Digital Business Platform launch.  We will follow that up in short order with new articles covering both content and process services, including guidance on FTP load balancing and pulling custom property files into a process services workflow.

Lots of exciting stuff to come from the Premier Services Team, stay tuned!