Channel: Nuxeo Blogs » Product & Development

Using Docker Containers at Nuxeo – Part 2: How to Add VNC and OpenBox


I started writing about Docker last week (see Using Docker Containers – Part 1 – Build a Full Fledged Nuxeo Image). Another thing I wanted to do was something similar to X forwarding. This could come in handy to debug test containers built by Jenkins, for instance. Looking at different solutions, I realized there were a lot of possibilities out there: SSH, VNC, Xpra, NX and more.

The quickest working solution for me was using Xvfb and VNC (I tried Xpra like on Docker’s blog but could not make it work). I had two options here. I could keep on adding layers to an existing Dockerfile or I could create a new one. I chose to keep it minimal and built a new image based on the nuxeo/nuxeo image I built in the previous post.

To make this work I installed Xvfb and VNC. And to make it nicer to test, I also added the OpenBox window manager and Firefox. Once everything was installed, I set up VNC with a hard-coded password (yep, I should probably find a better way) and added a new entry to the supervisor configuration file. It simply runs startXvfb.sh.

# Nuxeo VNC
# This image runs Nuxeo, PostgreSQL, an SSH server, Apache and VNC.
#
# VERSION               0.0.1

FROM      nuxeo/nuxeo
MAINTAINER Laurent Doguin <ldoguin@nuxeo.com>

RUN apt-get install -y xvfb x11vnc openbox firefox

# Setup a password for vnc
RUN mkdir ~/.vnc
RUN x11vnc -storepasswd nuxeospirit ~/.vnc/passwd

# Expose default vnc port
EXPOSE 5900

# Add a script launching xvfb and x11vnc to supervisor configuration
RUN echo "[program:startxvfb]">> /etc/supervisor/conf.d/supervisord.conf
RUN echo "command=/bin/sh startXvfb.sh">> /etc/supervisor/conf.d/supervisord.conf

ADD startXvfb.sh startXvfb.sh

startXvfb.sh is a very simple script that starts an Xvfb session on display :1 with the OpenBox window manager, then starts the VNC server on that display.

#!/bin/bash
Xvfb :1 -extension GLX -screen 0 1024x780x24 &
DISPLAY=:1 /usr/bin/openbox-session &
x11vnc -usepw -display :1

exit 0

You can build it with docker build -t nuxeo/nuxeoVNC . (run from the directory containing the Dockerfile). Just make sure you've already built the one it's based on (nuxeo/nuxeo).

You can run this container like the one it’s based on:

docker run -d -P  nuxeo/nuxeoVNC

And now you can use any VNC client to connect to it. Just remember that the port is not 5900 but the one mapped by Docker (docker port <container> 5900 will tell you which). You'll be prompted for a password: it's 'nuxeospirit', the one hard-coded in the Dockerfile.

The post Using Docker Containers at Nuxeo – Part 2: How to Add VNC and OpenBox appeared first on Nuxeo Blogs.


Nuxeo Tech Talks – APIs @Nuxeo


Thibaud Arnault

Last week we held a new Nuxeo Tech Talk Meetup about APIs. If you follow this blog you know that this is a topic of interest here at Nuxeo. So it was perfectly natural to hold a meetup about it. This time we invited the founders of webshell.io: Thibaud Arnault and Mehdi Medjaoui. Thibaud presented their two platforms: webshell.io and oauth.io.

Webshell is an API of APIs: a declarative, evented JavaScript gateway for data retrieval and aggregation over HTTP APIs, built on Node.js. With Webshell, they want to help developers build applications faster using APIs. Sort of an "API to rule them all"!

Here’s how you would create a Google Map centered on San Francisco:

var m = apis.google.maps({height: "215px"})
m.center("San francisco")
m.zoom(9)

And of course they handle all the authentication hassle for you. This is where OAuth.io comes in. They have integrated more than 90 providers so you don't have to do it. If you have ever implemented OAuth yourself (especially the 1.0 version), then you understand the value of OAuth.io.  :-)

Then our CTO, Thierry Delprat, talked about the specifics of our API. His talk reflected the different aspects of the Nuxeo Platform, like modularity and configurability.

These make it an interesting use case for building an API in a flexible, composable fashion. This was the subject of his presentation. I invite you to look at the slides:

The next Meetup will be held on the 13th of February. We will be hosting the Docker Paris meetup.

The post Nuxeo Tech Talks – APIs @Nuxeo appeared first on Nuxeo Blogs.

[Q&A Friday] How to Upload Files and Bind Them to Documents Using the REST API

How to upload files to Nuxeo

Today I have chosen a question from Bauke Roo, who asks how to upload files to Nuxeo and bind them to a document using the REST API.

This is probably one of the first things someone would like to test when playing with the REST API. I thought I would post the answer on the blog so that everyone knows how to do it.

There are currently two ways to upload files. The first one is to send a blob using standard HTTP multipart encoding. This is not always the best idea: your client might not support multipart encoding, or you might have several files to send but prefer to send them as separate chunks (because a web server in front of Nuxeo limits the POST size).

Maybe you want to upload files as soon as possible and then run the operation once everything has been uploaded to the server, as on a mobile phone, for example. If you create a document from an image you've taken with your phone camera, you'll see a progress bar. As the file starts to upload, you can fill in the rest of the metadata. Then, as soon as the upload is finished, the operation creating the document can be called. So, yes, we don't recommend uploading blobs using HTTP multipart encoding.

Instead, we recommend using the batch/upload endpoint. It represents a place on the system where you can upload temporary files and do something with them later. To create a batch you have to do an HTTP POST like this:

POST http://localhost:8080/nuxeo/api/v1/automation/batch/upload
X-Batch-Id: mybatchid
X-File-Idx:0
X-File-Name:myFile.zip
-----------------------
The content of the file

Using the command line tool curl, it translates to this:

curl -H "X-Batch-Id: mybatchid" -H "X-File-Idx:0" -H "X-File-Name:myFile.zip" -F file=@myFile.zip -u Administrator:Administrator http://localhost:8080/nuxeo/api/v1/automation/batch/upload

Which will return something like:

{"uploaded":"true","batchId":"mybatchid"}

The batch has been created. Let’s see what is inside that batch:

curl -u Administrator:Administrator http://localhost:8080/nuxeo/api/v1/automation/batch/files/mybatchid

[{"name":"myFile.zip","size":5809}]

The server’s response contains the list of files and associated sizes. If you upload another file with another index, you’ll have the following:

curl -H "X-Batch-Id: mybatchid" -H "X-File-Idx:1" -H "X-File-Name:myOtherFile.zip" -F file=@myOtherFile.zip -u Administrator:Administrator http://localhost:8080/nuxeo/api/v1/automation/batch/upload

curl -u Administrator:Administrator http://localhost:8080/nuxeo/api/v1/automation/batch/files/mybatchid

[{"name":"myFile.zip","size":5809},{"name":"myOtherFile.zip","size":5819}]

As you can see, a batch is a simple list of files uploaded to the server, accessible through a unique ID. Now that we have some files on the server, we want to use them. Let's create a new document using this batch.

POST http://localhost:8080/nuxeo/api/v1/path/default-domain/workspaces/myworkspace

{
  "entity-type": "document",
  "name":"myNewDoc",
  "type": "File",
  "properties" : {
    "dc:title":"My new doc",
    "file:content": {
      "upload-batch":"mybatchid",
      "upload-fileId":"0"
    }
  }
}

curl -X POST -H "Content-Type: application/json" -u Administrator:Administrator -d '{ "entity-type": "document", "name":"myNewDoc", "type": "File", "properties" : { "dc:title":"My new doc","file:content": {"upload-batch":"mybatchid","upload-fileId":"0"}}}' http://localhost:8080/nuxeo/api/v1/path/default-domain/workspaces/myworkspace

This will create a new File document with the first element of our batch (because of "upload-fileId":"0") in the main file field file:content. Once our batch is used, it's automatically removed, so now we have lost our second file. To add both files to the files schema, you have to do it like this:

 curl -X POST -H "Content-Type: application/json" -u Administrator:Administrator -d '
  { "entity-type" : "document", 
  "name" : "myNewDoc2", 
  "type" : "File", 
  "properties" : { "dc:title" : "My new doc2", 
      "files:files" : [ { "file" : { "upload-batch" : "mybatchid", 
                "upload-fileId" : "0" 
              }, 
            "filename" : "myFile.zip" 
          }, 
          { "file" : { "upload-batch" : "mybatchid", 
                "upload-fileId" : "1" 
              }, 
            "filename" : "myOtherFile.zip" 
          } 
        ] 
    } 
}' http://localhost:8080/nuxeo/api/v1/path/default-domain/workspaces/workspace
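If you script this, the same JSON body can be assembled programmatically. Here is a minimal Python sketch (the helper name is mine; the payload fields are exactly those shown above):

```python
import json

# Hypothetical helper: builds the document-creation payload that binds
# batch entries to the files:files list, as in the curl call above.
def batch_files_payload(name, title, batch_id, filenames):
    return {
        "entity-type": "document",
        "name": name,
        "type": "File",
        "properties": {
            "dc:title": title,
            "files:files": [
                {"file": {"upload-batch": batch_id, "upload-fileId": str(i)},
                 "filename": filename}
                for i, filename in enumerate(filenames)
            ],
        },
    }

payload = batch_files_payload("myNewDoc2", "My new doc2", "mybatchid",
                              ["myFile.zip", "myOtherFile.zip"])
body = json.dumps(payload)  # POST this as the request body
```

The upload-fileId values simply follow the X-File-Idx indexes used when uploading the files.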

This way you have to define at least the title, path and type of the document. If you don't want to define anything and would rather use the file information, just like with drag and drop, you can use the batch/execute endpoint. This time the target document is specified in the context map of the request instead of in the endpoint URL.


curl -X POST -H "Content-Type: application/json+nxrequest" -d '{"params":{"operationId":"FileManager.Import","batchId":"mybatchid"},"context":{"currentDocument":"/default-domain/workspaces/workspace"} }' -u Administrator:Administrator http://localhost:8080/nuxeo/site/automation/batch/execute

Sometimes the created document does not have the type you expected. This creation method uses the mime type of the file to choose the document type. If you want more control over this, you can specify the mime type of the file when uploading it, using the X-File-Type HTTP header.

curl -H "X-Batch-Id: mybatchid" -H "X-File-Idx:0" -H "X-File-Name:myVideo.mkv" -H "X-File-Type:video/x-matroska" -F file=@myVideo.mkv -u Administrator:Administrator http://localhost:8080/nuxeo/api/v1/automation/batch/upload
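To recap the headers this endpoint uses, here is a small Python sketch (the function name is mine); X-File-Type is only added when you want to force a mime type:

```python
# Hypothetical helper: builds the batch/upload headers documented above.
def batch_upload_headers(batch_id, index, filename, mime_type=None):
    headers = {
        "X-Batch-Id": batch_id,
        "X-File-Idx": str(index),
        "X-File-Name": filename,
    }
    if mime_type is not None:
        # Optional: forces the document type chosen by FileManager.Import.
        headers["X-File-Type"] = mime_type
    return headers

headers = batch_upload_headers("mybatchid", 0, "myVideo.mkv", "video/x-matroska")
```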

You should now be all set to use files with the REST API!

The post [Q&A Friday] How to Upload Files and Bind Them to Documents Using the REST API appeared first on Nuxeo Blogs.

[Q&A Friday] How to Download Files Attached to Documents Using the REST API

How to download files attached to a document

Today we have a question from Christian, who asks how to read document files using the REST API. This is the perfect question considering that last week I answered how to upload them. Antoine started by answering something very practical: use the Nuxeo download URL. I will now walk you through it.

URLs have to be in the following form:

http://<server>:<port>/nuxeo/nxbigfile/default/<doc_id>/files:files/<file_index>/file/<file_name>

Which would, for instance, give something like:

http://my.server.com:8080/nuxeo/nxbigfile/default/b54c8b41-86c9-4c9b-bfe0-e6b1ca01313f/files:files/1/file/NUXEO_User%20stories.pdf

Note that here we use nxbigfile instead of nxfile (classically used by the webapp) because it is more efficient for downloading big files. Beware: the index of the first file is 1, the second is 2, and so on.
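For scripting, the URL pattern above can be assembled like this; a minimal Python sketch (the helper name is mine), reproducing the example URL with its 1-based file index:

```python
from urllib.parse import quote

# Hypothetical helper: builds an nxbigfile download URL following the
# pattern described above. file_index is 1-based.
def nxbigfile_url(base, doc_id, file_index, file_name, repo="default"):
    return "%s/nuxeo/nxbigfile/%s/%s/files:files/%d/file/%s" % (
        base, repo, doc_id, file_index, quote(file_name))

url = nxbigfile_url("http://my.server.com:8080",
                    "b54c8b41-86c9-4c9b-bfe0-e6b1ca01313f",
                    1, "NUXEO_User stories.pdf")
```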

But this is not the only solution available. As usual there are many.

You can also use the REST API and the operation adapter. First you need to get the document URL; it could be something like http://localhost:8080/nuxeo/api/v1/id/5c911f4d-4627-4587-9ead-9b0cd1bc3dbc/. When you have a resource, you can use adapters to do something with it. Here we can use the @op adapter followed by the id of the operation to use. The URL will then look like http://localhost:8080/nuxeo/api/v1/id/5c911f4d-4627-4587-9ead-9b0cd1bc3dbc/@op/Blob.Get.

Beware: to get a proper answer from the server, you need to use HTTP POST instead of GET, because you're calling an operation, and operations take parameters. In our case there is a default parameter ('file:content'), but you still need to send an empty params map. Using the curl command line tool, it would look like this:


curl -O -X POST -H "Content-Type: application/json+nxrequest" -d '{"params":{}}' -u Administrator:Administrator http://localhost:8080/nuxeo/api/v1/id/5c911f4d-4627-4587-9ead-9b0cd1bc3dbc/@op/Blob.Get

Do not forget the -O parameter, or else curl will print the content of the file directly to your console. There are other operations available to retrieve blobs. Here I used Blob.Get, but you can also use Blob.GetList to get a list of blobs, or Blob.GetAll to get all the blobs attached to the document. Blob lists are usually returned as a zip file.
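If you build the request in code rather than with curl, a minimal Python sketch of that @op call looks like this (the helper name is mine; the endpoint, content type and empty params map are as described above):

```python
import json

# Hypothetical helper: describes the HTTP request for an @op adapter call.
def blob_get_request(base, doc_id, operation="Blob.Get"):
    return {
        "url": "%s/nuxeo/api/v1/id/%s/@op/%s" % (base, doc_id, operation),
        "method": "POST",  # operations are always called with POST
        "headers": {"Content-Type": "application/json+nxrequest"},
        "body": json.dumps({"params": {}}),  # empty params map is required
    }

req = blob_get_request("http://localhost:8080",
                       "5c911f4d-4627-4587-9ead-9b0cd1bc3dbc")
```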

The post [Q&A Friday] How to Download Files Attached to Documents Using the REST API appeared first on Nuxeo Blogs.

Writing a JIRA Plugin to Integrate Segment.io


I recently started writing a JIRA plugin for Nuxeo and thought I would share some thoughts about the process.

First, let me talk about why a JIRA plugin. We recently started using segment.io. It's an analytics tool to rule them all (sound familiar?). Basically, you replace all your different analytics tools (Google Analytics, Marketo, etc.) with segment.io. It lets you record what you want and then forward it to other existing tools (again, Google Analytics, Marketo, etc.).

We use JIRA for our support tickets. We wanted to get some stats about them, so we started using segment.io in JIRA. So you get the idea: I needed to put a segment.io tracker on JIRA, which basically means putting some JavaScript on every JIRA page.

But unlike Confluence, you can't just put custom code in every page header from the administration tab. It's a little more complicated than that. You can either override a JSP template directly on a running JIRA instance (….) or you can build a plugin. I chose the plugin approach and I'm going to share it with you today.

The Atlassian SDK

The first thing to do when you want to create a plugin for any Atlassian product is to download their SDK. What's in the SDK, you ask? It contains Apache Maven, a pre-filled Maven repository and a set of scripts. Those scripts wrap many Maven commands to hide some of the complexity and give a consistent experience. And that's all. I like how they use standard technologies but make them easier for newcomers thanks to their custom scripts. If you are a Maven guru you probably don't need them. But…


Setting up the SDK is easy: you just have to put the scripts in your PATH environment variable. Then you have commands to easily create plugin projects (probably based on a Maven archetype) like atlas-create-jira-plugin-module. This will generate the skeleton of the plugin. Next, go into the plugin folder and run atlas-run. This will start a JIRA development server instance with the generated plugin already deployed. You can verify this easily by going to the administration interface.

The next logical step is to import this code into your favorite IDE. I personally use Eclipse most of the time. Usually I run mvn eclipse:eclipse to generate the files needed by Eclipse, but here I need to use the Maven embedded in the SDK. You can use the atlas-mvn command for that. It's just like the usual mvn, but uses the version from the SDK.

Now that the code is in Eclipse, I can start coding.

A Simple Plugin

Again, what I want to do is have some JavaScript in the header of every JIRA page to use segment.io. To do so I need to add what Atlassian calls a Web Panel. It's essentially a template rendered in a predefined location. I could not find an exhaustive list of locations, but when I browsed JIRA's code (which was in the target folder of my plugin thanks to the atlas-run command), I found atl.header.after.scripts, which was exactly what I needed. I want to add my JavaScript code after the regular scripts because I am going to use jQuery.

Then I created a Velocity template in /templates/segmentio/segmentio-tracker.vm. For each Web Panel you declare, you can associate a Context Provider. This is a Java class implementing the ContextProvider interface, responsible for filling the Velocity context of your panel.

All this information needs to be declared in a web-panel tag in the atlassian-plugin.xml file. This is the descriptor of your module, the main entry point of all the extensions/modifications you can do to an Atlassian product. Here’s how my first descriptor looked:

<?xml version="1.0" encoding="UTF-8"?>
<atlassian-plugin key="${project.groupId}.${project.artifactId}"
	name="${project.name}" plugins-version="2">
	<plugin-info>
		<description>${project.description}</description>
		<version>${project.version}</version>
		<vendor name="${project.organization.name}" url="${project.organization.url}" />
		<param name="plugin-icon">images/pluginIcon.png</param>
		<param name="plugin-logo">images/pluginLogo.png</param>
	</plugin-info>

	<web-panel name="segment.io" i18n-name-key="segment-.io.name"
		key="segment-.io" location="atl.header.after.scripts" weight="1000">
		<context-provider class="com.nuxeo.segmentio.SegmentIOTracker" />
		<resource name="view" type="velocity" location="/templates/segmentio/segmentio-tracker.vm" />
	</web-panel>

</atlassian-plugin>

At this point my context provider is not doing anything, and my template simply holds the default JavaScript from the Segment.io documentation. Nothing crazy. But since I went to the trouble of writing an actual plugin, I thought I would make more of it and make it configurable/reusable. My goal has changed: it is now to add a panel in the Administration interface that lets the administrator provide a Segment.io API key and choose whether to track user logins. So I need to figure out how to add a link and a panel in the Administration interface.

Links, a bit like actions in Nuxeo, are represented by what they call a Web Item. The most important concepts of Web Items are the section and the weight. The section is the future placement of your link, like categories for actions in Nuxeo, and the weight defines the order of the links within the same section. And of course you have the link tag, which defines where the web item should link to. Here, it goes to SegmentIOConfigAction.jspa, which is actually an alias defined in the segmentIOConfigAction webwork configuration.

A webwork defines a URL-addressable ‘action’, allowing JIRA’s user-visible functionality to be extended or partially overridden.

To keep the Nuxeo analogy going, it's like the Seam bean backing an XHTML template, except here it's a simple class and injection is handled in the constructor using Spring. This class will handle the /templates/segmentio/config.vm Velocity template. To persist the information retrieved by the webwork, I'll create a new SegmentIOConfig service using the component tag. The goal of this service is to store my plugin configuration. JIRA already has a service called PluginSettingsFactory that will help with this. So, to make sure this PluginSettingsFactory service is available, I'll add it to my descriptor with the component-import tag.

In the end, after all this wiring, my atlassian-plugin.xml file looks like this:


        <!-- Declare my new configuration service to hold the conf parameters-->
        <component key="segmentioService" name="SegmentIO Configuration Service"
          class="com.nuxeo.segmentio.config.SegmentIOConfig" />

        <!-- Import the PluginSettingsFactory to make sure it's available in my service. -->
        <component-import key="pluginSettingsFactory"
          interface="com.atlassian.sal.api.pluginsettings.PluginSettingsFactory" />

        <!-- Creates a new link in the Jira Administration interface -->
	<web-item key="segmentIOConfigActionLink" section="admin_plugins_menu/integrations_section"
		i18n-name-key="com.nuxeo.segmentio.config.adminLink" name="Configure SegmentIO"
		weight="1">
		<label key="com.nuxeo.segmentio.config.adminLink" />
		<link linkId="segmentIoActionLink">/secure/admin/SegmentIOConfigAction.jspa</link>
	</web-item>

        <!-- Declare the configuration form used to retrieve segment.io parameters -->
	<webwork1 key="segmentIOConfigAction" name="SegmentIO Config Action">
		<actions>
			<action name="com.nuxeo.segmentio.config.SegmentIOConfigAction"
				alias="SegmentIOConfigAction">
				<view name="success">/templates/segmentio/config.vm</view>
				<view name="input">/templates/segmentio/config.vm</view>
			</action>
		</actions>
	</webwork1>
        <!-- the internationalization resources -->
	<resource type="i18n" name="i18n" location="i18n.messages" />

On to the actual coding part! Let's start with SegmentIOConfig, the class responsible for persisting the configuration options. It's a very simple class. Notice that the final PluginSettingsFactory field is injected automatically through the constructor. This works because of the component-import tag in the descriptor. Then I simply added a getter and a setter for each element of my configuration: the segment.io API key and a boolean value to activate user login tracking. The PluginSettingsFactory service takes care of everything, as you can see in the source code:

package com.nuxeo.segmentio.config;

import com.atlassian.sal.api.pluginsettings.PluginSettingsFactory;

public class SegmentIOConfig {

	final PluginSettingsFactory pluginSettingsFactory;

	String SEGMENT_IO_CONFIG_KEY = "com.nuxeo.segmentio.config.apikey";

	String SEGMENT_IO_CONFIG_TRACK_LOGIN = "com.nuxeo.segmentio.config.trackLogin";

	public SegmentIOConfig(PluginSettingsFactory pluginSettingsFactory) {
		this.pluginSettingsFactory = pluginSettingsFactory;
	}

	public void storeSegmentIOKey(String value) {
		pluginSettingsFactory.createGlobalSettings().put(SEGMENT_IO_CONFIG_KEY,
				value);
	}

	public String getSegmentIOKey() {
		Object apiKey = pluginSettingsFactory.createGlobalSettings().get(
				SEGMENT_IO_CONFIG_KEY);
		if (apiKey != null && apiKey instanceof String) {
			return (String) apiKey;
		} else {
			return null;
		}
	}

	public void storeTrackLogin(boolean trackLogin) {
		if (trackLogin) {
			pluginSettingsFactory.createGlobalSettings().put(
					SEGMENT_IO_CONFIG_TRACK_LOGIN, "true");
		} else {
			pluginSettingsFactory.createGlobalSettings().put(
					SEGMENT_IO_CONFIG_TRACK_LOGIN, "false");
		}
	}

	public Boolean getTrackLogin() {
		return Boolean.valueOf((String) pluginSettingsFactory
				.createGlobalSettings().get(SEGMENT_IO_CONFIG_TRACK_LOGIN));
	}
}

About the WebWork backing class: the constructor takes a SegmentIOConfig instance as a parameter. Again, this works because of the component tag in the descriptor; the wiring is done by Spring. The apiKey, trackLogin and trackLoginSelect fields are used in the config.vm template. The doExecute method does nothing except return success. It's called when the template is displayed after the link has been clicked. The doUpdate method, on the other hand, is called when the user clicks the Save button of the form. The code is very simple, as you can see:

package com.nuxeo.segmentio.config;

import com.atlassian.jira.web.action.JiraWebActionSupport;

public class SegmentIOConfigAction extends JiraWebActionSupport {
	private SegmentIOConfig config;
	private String apiKey;
	private boolean trackLogin;
	private String[] trackLoginSelect;

	public SegmentIOConfigAction(SegmentIOConfig config) {
		this.config = config;
		this.apiKey = config.getSegmentIOKey();
		this.trackLogin = config.getTrackLogin();
	}

	@Override
	protected String doExecute() throws Exception {
		return SUCCESS;
	}

	public String doUpdate() {
		config.storeSegmentIOKey(apiKey);
		if (trackLoginSelect != null ) {
			trackLogin = true;
		} else {
			trackLogin = false;
		}
		config.storeTrackLogin(trackLogin);
		return getRedirect("SegmentIOConfigAction.jspa");
	}

	public String getApiKey() {
		return apiKey;
	}

	public void setApiKey(String apiKey) {
		this.apiKey = apiKey;
	}

	public String[] getTrackLoginSelect() {
		return trackLoginSelect;
	}

	public void setTrackLoginSelect(String[] trackLoginSelect) {
		this.trackLoginSelect = trackLoginSelect;
	}

	public boolean isTrackLogin() {
		return trackLogin;
	}

	public void setTrackLogin(boolean trackLogin) {
		this.trackLogin = trackLogin;
	}

}

Here’s the Velocity template (can’t say I am a fan, it feels so old…) associated with the WebWork:

<html>
  <head>
    <title>$i18n.getText("com.nuxeo.segmentio.config.title")</title>
    <meta name="decorator" content="atl.admin">
  </head>
  <body>
    <table width="100%" cellspacing="0" cellpadding="10" border="0">
      <tbody>
        <tr>
          <td>
            <table class="jiraform maxWidth">
              <tbody>
                <tr>
                  <td class="jiraformheader">
                    <h3 class="formtitle">$i18n.getText("com.nuxeo.segmentio.config.title")</h3>
                  </td>
                </tr>
                <tr>
                  <td class="jiraformbody">
                    <p> $i18n.getText("com.nuxeo.segmentio.config.instructions")</p>
                    <form method="post" action="SegmentIOConfigAction!update.jspa">
                      <p>
                        <table>
                          <tr>
                            <td>$i18n.getText("com.nuxeo.segmentio.config.apiKeyCell")</td>
                            <td>
                              <input type="text" name="apiKey" #if ($!apiKey) value="$apiKey" #end />
                            </td>
                          </tr>
                          <tr>
                            <td>$i18n.getText("com.nuxeo.segmentio.config.trackLoginCell")</td>
                             <td><input type="checkbox" name="trackLoginSelect" id="trackLoginSelect" #if ($trackLogin) checked='checked' #end/>
                            </td>
                          </tr>
                          <tr>
                            <td colspan="2">
                              <input type="submit" value="$i18n.getText('com.nuxeo.segmentio.config.applyButton')">
                            </td>
                          </tr>
                        </table>
                      </p>
                    </form>
                  </td>
                </tr>
              </tbody>
            </table>
            <p></p>
          </td>
        </tr>
      </tbody>
    </table>
  </body>
</html>

Now with all of this I have an administration panel for my plugin:

Jira Segment.io Configuration Panel

It’s good to have all this information, but I need to put it in my header template. To make it available, I need to use the ContextProvider associated with the template. It’s a simple Java class implementing the ContextProvider interface. Again, the constructor is used to ‘inject’ SegmentIOConfig and JiraAuthenticationContext. The latter gives information on the currently logged-in user. Everything I need is added to a Map returned by the getContextMap method. That’s what I’ll be able to use in my Velocity template.

package com.nuxeo.segmentio;

import java.util.Map;

import com.atlassian.jira.component.ComponentAccessor;
import com.atlassian.jira.config.properties.APKeys;
import com.atlassian.jira.config.properties.ApplicationProperties;
import com.atlassian.jira.security.JiraAuthenticationContext;
import com.atlassian.jira.user.ApplicationUser;
import com.atlassian.jira.util.JiraVelocityUtils;
import com.atlassian.jira.util.collect.MapBuilder;
import com.atlassian.plugin.PluginParseException;
import com.atlassian.plugin.web.ContextProvider;
import com.nuxeo.segmentio.config.SegmentIOConfig;

public class SegmentIOTracker implements ContextProvider {

	private final JiraAuthenticationContext authenticationContext;

	private final SegmentIOConfig segmentIOConfig;

	private Map<String, String> params;

	public SegmentIOTracker(JiraAuthenticationContext authenticationContext,
			SegmentIOConfig segmentIOConfig) {
		this.authenticationContext = authenticationContext;
		this.segmentIOConfig = segmentIOConfig;
	}

	@Override
	public void init(Map<String, String> params) throws PluginParseException {
		this.params = params;
	}

	@Override
	public Map<String, Object> getContextMap(Map<String, Object> context) {
		final MapBuilder<String, Object> paramsBuilder = MapBuilder
				.newBuilder(JiraVelocityUtils.getDefaultVelocityParams(context,
						authenticationContext));
		paramsBuilder.addAll(params);
		Boolean trackLogin = segmentIOConfig.getTrackLogin();
		if (trackLogin) {
			ApplicationUser user = authenticationContext.getUser();
			if (user != null) {
				paramsBuilder.add("username", user.getUsername());
				paramsBuilder.add("name", user.getDisplayName());
				paramsBuilder.add("email", user.getEmailAddress());
			}
		}
		ApplicationProperties applicationProperties = ComponentAccessor
				.getApplicationProperties();
		String baseUrl = applicationProperties.getString(APKeys.JIRA_BASEURL);
		paramsBuilder.add("baseUrl", baseUrl);
		paramsBuilder.add("segmentIOKey", segmentIOConfig.getSegmentIOKey());
		return paramsBuilder.toMap();
	}

}

And here it is, finally: the template that adds segment.io to JIRA. It starts with a null-or-empty test on the segmentIOKey value. If a key has been provided, then the JavaScript library can be initialized. If the current page is an issue, then the page method is called with the project key as the first argument. The first argument of the JavaScript page method is actually treated as a category. The second argument is the key and the title of the issue.

Then, if the trackLogin box has been checked and there is a current user, that user is identified and we send a Login event to segment.io if they just logged in.

#if( $!segmentIOKey != "")
<script type="text/javascript">
window.analytics || (window.analytics = []);
window.analytics.methods = ['identify', 'track', 'trackLink', 'trackForm',
'trackClick', 'trackSubmit', 'page', 'pageview', 'ab', 'alias', 'ready',
'group', 'on', 'once', 'off'];
window.analytics.factory = function (method) {
  return function () {
    var args = Array.prototype.slice.call(arguments);
    args.unshift(method);
    window.analytics.push(args);
    return window.analytics;
  };
};

for (var i = 0; i < window.analytics.methods.length; i++) {
  var method = window.analytics.methods[i];
  window.analytics[method] = window.analytics.factory(method);
}

window.analytics.load = function (apiKey) {
  var script = document.createElement('script');
  script.type = 'text/javascript';
  script.async = true;
  script.src = ('https:' === document.location.protocol ? 'https://' : 'http://') +
                'd2dq2ahtl5zl1z.cloudfront.net/analytics.js/v1/' + apiKey + '/analytics.min.js';

  // Find the first script element on the page and insert our script next to it.
  var firstScript = document.getElementsByTagName('script')[0];
  firstScript.parentNode.insertBefore(script, firstScript);
};

window.analytics.SNIPPET_VERSION = '2.0.8';
window.analytics.load('$segmentIOKey');

var key = jQuery('#key-val').attr("data-issue-key");
if (key) {
  var projectKey = key.split("-")[0];
  var summary = jQuery("#summary-val").text();
  window.analytics.page(projectKey, key + " - " + summary);
} else {
  window.analytics.page();
}

#if( $trackLogin )
if ('$baseUrl'.indexOf(document.location.host) != -1) {
  if ((document.referrer.indexOf("login.xml")  != -1 )|| (document.referrer.indexOf("login.jsp") != -1)) {
    window.analytics.identify('$email', {
     email: '$email',
     jira_username: '$username',
     jira_name: '$name',
     jira_last_login: Date.now()
    });
    window.analytics.track('Jira Login');
  }
}
#end

</script>
#end

The full source code of the plugin is available on Github.

The post Writing a JIRA Plugin to Integrate Segment.io appeared first on Nuxeo Blogs.

Using Docker as a Jenkins Cloud Provider


As you know, I’ve started experimenting with Docker. One of the cool use cases we wanted to cover was provisioning Jenkins slaves through a Docker host.

Jenkins is our continuous integration server. It runs many different jobs to build and test the various parts of the platform. These jobs are usually run on dedicated slave nodes. We have different physical machines for each but sometimes we need more nodes. We already use Amazon EC2 instances to provision nodes. All the setup required is of course completely specific to Amazon. It would be nice to have a more ‘generic’ setup.

This is when the Jenkins Docker plugin comes in handy. You can use it to provision Jenkins slaves on any Docker host. And of course you can install Docker on Amazon EC2 or anywhere else. There are already several Ansible playbooks that can help you install Docker on remote machines. Just make sure that Docker runs on TCP with the following option: -H tcp://127.0.0.1:4243. This will be used by Jenkins to manage the different images and containers.
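As a sketch, on a Debian/Ubuntu host where the daemon reads its options from /etc/default/docker (an assumption — the file location varies by distribution), the TCP option can be made permanent like this:

```shell
# /etc/default/docker — make the Docker daemon listen on TCP for Jenkins,
# in addition to the usual local Unix socket:
DOCKER_OPTS="-H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock"
```

After restarting the docker service, Jenkins can reach the host on port 4243.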

To set up your host(s), go to the configure page, in the Cloud section. The Docker option will be available along with other cloud providers like EC2, JClouds, VirtualBox, etc. If you select Docker, you'll first be asked for a name and a URL. My Docker host is on a machine called docker and the TCP port is 4243. My configuration looks like this:

Docker Jenkins Cloud Provider

Once you have set up a host, you can set up different images to be used as Jenkins slaves. Here’s the list of parameters:

  • ID – the id of the image you want to use
  • Labels – the Label used to identify your node
  • Credentials – the credentials used to connect to the docker image using SSH
  • Remote File System Root – the home folder of the user used by Jenkins
  • Tag on Completion – if true, this will create a docker image for each build you run using the name of the job as the repository id and the build number as tag
  • Instance Cap – how many instances you want to run at the same time
  • DNS – the DNS server to use in your Docker image

If you have followed closely, you will have realized that the Docker image needs at least SSH installed, as well as a user to log in with. To build my image, I adapted an Ansible playbook that we use to build the EC2 images for our Jenkins slaves, but it's really easy to build one yourself. Just take a look at the plugin page; the author explains everything you need to do.
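For illustration, here is a minimal sketch of such an image — the base image, packages, user name, and password are assumptions, not the playbook we actually use; a Jenkins slave also needs a JDK, hence the openjdk package:

```dockerfile
# Minimal sketch of a Jenkins slave image: sshd plus a user to log in as.
FROM ubuntu
RUN apt-get update && apt-get install -y openssh-server openjdk-7-jdk
RUN mkdir -p /var/run/sshd
# Hypothetical user/password; match them with the Credentials field above.
RUN useradd -m -s /bin/bash jenkins && echo 'jenkins:jenkins' | chpasswd
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
```

The home folder of that user (/home/jenkins here) is what you would put in the remote filesystem root field.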

All of this makes it really easy to have Jenkins Slaves on demand on any host you want as long as Docker is installed on it. And again, Ansible can be really helpful to put Docker on any remote machine. This way you get a ‘generic’ on demand slave setup. But this is not the only reason I find using Docker appealing.

One of the issues we have with EC2 is that the instances we use are automatically removed when the job is done. Sometimes we would like to use them again to better understand what went wrong in a failed job. If you have ticked the Tag on Completion box in your configuration, all job results are still accessible. Try running docker images on your Docker host. You will see a list of images with the name of the job as REPOSITORY and the build number as TAG:

REPOSITORY                 TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
nuxeo-master               #55                 e230007c2a40        51 minutes ago      2.506 GB
nuxeo-master               #54                 42a7f8af7d50        52 minutes ago      2.506 GB
nuxeo-master               #53                 680336ec5a9d        About an hour ago   2.506 GB
nuxeo-master               #52                 c611cc87e810        About an hour ago   2.506 GB
docker-test                #22                 4c4e883e76f0        About an hour ago   2.491 GB
docker-test                #21                 9c8c393f8ebb        About an hour ago   2.491 GB
...

This means you can now do something like docker run -t -i -P nuxeo-master:#55 /bin/bash. This will open a bash session on the image used to run the job, tagged at the end of said job. If you look at the previous blog post I wrote about Docker, it gets even better: if you have set up something like VNC on your image, you can run the image as a daemon and then connect to it with your VNC client. But that's only if you're not comfortable enough with a bash session :-)

Now all of this is really neat, but there is one small thing missing for me. Right now all the images are kept indefinitely. This can take a lot of disk space pretty quickly so it would be nice to be able to say something like ‘keep only the 10 latest images of the same job’. Of course you can always have a script running on your Docker host to do the cleanup but it requires setting up additional configuration on the host.
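Pending such a feature, here is a sketch of what that cleanup could look like — a hypothetical script, not part of the plugin, relying on the fact that `docker images` lists the newest images first:

```shell
#!/bin/sh
# Hypothetical cleanup: keep only the KEEP newest tags of each repository.
KEEP=10

# Reads `docker images` output on stdin and prints the IMAGE ID column
# for every image beyond the KEEP newest tags of its repository.
old_image_ids() {
  awk -v keep="$KEEP" 'NR > 1 { if (++count[$1] > keep) print $3 }'
}

# On a real Docker host you would then run:
# docker images | old_image_ids | xargs -r docker rmi
```

Run from cron on the Docker host, this would keep the 10 latest images of each job.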

Keep in mind that this is only the first version of the plugin. I guess many other cool features are coming.

The post Using Docker as a Jenkins Cloud Provider appeared first on Nuxeo Blogs.

[Q&A Friday] How to Send Multiple Document Links in an Automation Notification Email

How to Send Multiple Document Links in an Automation Notification Email

Today we have a question from nicoespinoza who asks how to send links to multiple selected documents in a single notification email template. When you use the Send E-Mail operation, it sends an email for each document given as input. So if you want to send only one email containing several document links, you can't give all the selected documents as input. The first thing to do is put the selected documents in a context variable and give only one document as input to the Send E-Mail operation. Here's how I did it:

I use the Fetch Document operation with the ‘/’ parameter to have only one document as the input of the Send E-Mail operation. The ‘/’ parameter identifies the root document of your repository.

Operation Chain

The next question would be how to get the document links in the email template. A very simple email template displaying only the title and link to each document would look like this:

<#list selectedDocuments as document>
 <a href="${baseUrl}nxdoc/${Session.getRepositoryName()}/${document.id}/${viewId}">${document.title}</a>
</#list>

If you don’t know what those tags are, it’s FreeMarker, a template engine used in many places in Nuxeo. If you look at the operation chain screenshot above, you see that the selected documents from the UI are stored in the ‘selectedDocuments’ context variable. So I can use the #list FreeMarker tag to iterate over the list of selected documents.

To get the document URL, there is currently no built-in method. We have to generate it ourselves. Fortunately, we have everything we need and it's quite simple. A default URL in Nuxeo is made of the base URL (http://yourServer/nuxeo/), followed by nxdoc which is the name of the default URLCodec, followed by the name of the repository where the document is stored, followed by the id of the document, and then the viewId.

Again we have everything needed in the FreeMarker context:

<a href="${baseUrl}nxdoc/${Session.getRepositoryName()}/${document.id}/${viewId}">${document.title}</a>

For me ${baseUrl} will render http://localhost:8080/nuxeo/, ${Session.getRepositoryName()} will render default, ${document.id} will render something like 9811fa6f-5e42-46cb-ac4a-af341f1dab15 and ${viewId} will render view_documents.
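Putting those example values together in a quick shell sketch:

```shell
# Assembling a default Nuxeo document URL from its parts,
# using the example values above:
baseUrl="http://localhost:8080/nuxeo/"
repository="default"
docId="9811fa6f-5e42-46cb-ac4a-af341f1dab15"
viewId="view_documents"
url="${baseUrl}nxdoc/${repository}/${docId}/${viewId}"
echo "$url"
# → http://localhost:8080/nuxeo/nxdoc/default/9811fa6f-5e42-46cb-ac4a-af341f1dab15/view_documents
```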

This should be all you need to know to send notification emails containing selected document links using Content Automation.

The post [Q&A Friday] How to Send Multiple Document Links in an Automation Notification Email appeared first on Nuxeo Blogs.

Studio 2.17 Has Been Released: User Experience as a Priority


As we announced at Nuxeo World, our priority in 2014 is user experience. Studio is now quite mature in terms of what you can configure with it, and deserves improvements in usability. This release is the first in a series that will address usability. The full release notes are available in the Studio documentation space.

Quicker Loading

Nuxeo Studio-Referencing_a_csv_file_for_vocabulary_import

Importing a CSV file for referencing vocabularies

Some customer projects get very big, with hundreds of objects. Some workflow definitions are also getting very big, with dozens of nodes, and as a consequence the loading time of the project was getting longer and longer. Anahide took a series of measures to prevent this problem:

  • improved serialization of the project definition on the client side (remember Studio is written in GWT, so most of it happens client side),
  • lazy loading of the biggest features (workflows, automation chains, vocabularies),
  • optimization of the vocabularies feature, with a new "CSV reference" mode, described just after.

Referencing Vocabularies in a Separate File

You can now upload a CSV file as a resource and reference it as the source of the values for your vocabularies, instead of displaying it as a tree in Studio. That option is much more robust for larger datasets (some customers upload vocabularies with tens of thousands of lines, causing the server-side controls to take too much time).

MVEL Expression Validation

Screenshot of an example of MVEL validation error

MVEL expressions are dynamically validated

The MVEL expressions you use are now checked server side. If you make a typo, a syntax error, or forget a bracket, Nuxeo Studio now tells you there is a problem (it displays what you would have seen in the logs when deploying and testing your automation chain). We hope this enhancement will save you a lot of time! Note that it doesn't evaluate the expression though, as that would require more context information. At least you are sure your expression is valid and can be built at runtime. The implementation of this feature was interesting, as we discovered an optimization problem with the way the MVEL engine evaluated some expressions (particularly ternary expressions), leading to an exponential increase in validation time. A patch was submitted by Florent to the MVEL project and has been accepted.

Syntax Highlighting and Code Suggestion in Studio

CodeMirror is being integrated in Studio, bringing users great new features: syntax highlighting, suggestions and completion

CodeMirror in Studio offers syntax highlighting, suggestions, and auto-completion

The integration of CodeMirror in Studio has started. You can check it out on the Theme feature when editing CSS, on the XML feature, and in the inline JavaScript definition of the Select2 widgets. CodeMirror offers syntax highlighting, code suggestions, and some auto-completion (for instance if you type <action>, it will automatically add the closing tag </action>). Suggestions appear when you type Ctrl+Space. That is a great step forward in the implementation of auto-suggestion in Studio. Stay tuned, more will come on this topic in the next releases, such as NXQL validation. This is very exciting!

Documenting Automation Chains

When you use the often overlooked "documentation" feature for commenting a chain, the documentation is now automatically unfolded, so as to make it more visible. Actually, in one of the next releases, we will make this documentation placeholder available on all the feature instances of your project.

Bug Fixes

Some bugs were fixed as usual, like the double creation of objects when validating the creation with the keyboard, and missing properties on some of the newest widgets.

Improved Automation Chain Screen Coming

Lise prepared mockups for Automation screen improvements. The goal is to improve the readability of the chains, especially when they get complex: you want to be able to read the parameter values easily. Do not hesitate to comment on the ticket, this should be implemented in the next release, around mid-April!

The post Studio 2.17 Has Been Released: User Experience as a Priority appeared first on Nuxeo Blogs.


Meetup – Docker @Nuxeo


Last week we hosted the fifth Docker Paris Meetup. It's nice to see this topic attracting so many people. Most of the attendees were developers, with a few ops and/or devops. This gives us a pretty good idea of the changes happening in that sector: developers are taking back more and more control over the complete life cycle of an application. It's especially true for the deployment phases, thanks to tools like Docker and Ansible.

We had three different talks: one about boot2docker by its creator Steeve Morin, another about Docker in the enterprise world by Adrien Blind and Arnaud Mazin from Octo, and a third about the architecture we foresee for nuxeo.io by Damien Metzler.

Boot2docker is the recommended way to use Docker on Mac OS. It became official with the latest 0.8 release. Steeve gave us a nice overview of the process he went through, from a weekend project to the most optimized way to run Docker anywhere.

The Octo guys gave us a very complete presentation on how Docker fits in traditional enterprise workflow/software factories. They showed us a series of scenarios for where and how you could integrate it.

Damien talked about the PaaS architecture we are planning for nuxeo.io. It’s based on CoreOS and Flynn layer 0.

The post Meetup – Docker @Nuxeo appeared first on Nuxeo Blogs.

Nuxeo Platform 5.9.2 is Available!


Nuxeo Platform 5.9.2
We just released Nuxeo Platform 5.9.2. It’s a Fast Track version. Note that Fast Track versions are only supported until the next FT (Check out this blog post about our release life cycle).

Take a look at the release notes for the whole story. As usual this new release was preceded by a Nuxeo Studio release, you can read the post Alain wrote to get all the details.

Release 5.9.2 brings UX enhancements to Nuxeo DAM. Nuxeo Drive is now easier to install and upgrade because the installer is signed and the local database upgrade is done automatically. Guillaume added some interesting features to the Select2-based suggestion widgets. You’ll find this particularly useful if you’re a Nuxeo Studio user.

If you’re a developer, you’ll be happy to know that this new version is the first one to be built with Maven 3. It can also run with JDK 8. We added a private marketplace channel to let users upload their own marketplace packages to Connect, making them visible and available in the Admin Center. Also, Arnaud worked on OAuth 2.0 support to make Nuxeo an OAuth 2.0 service provider.

You can download Nuxeo Platform 5.9.2 from our website.

Now we are on to Fast Track 5.9.3, which should be released in mid-April.

The post Nuxeo Platform 5.9.2 is Available! appeared first on Nuxeo Blogs.

Private Marketplace Channel: Deploying Custom Developments


We just released Nuxeo Platform 5.9.2, a Fast Track version. One of the new features is a private channel on our Marketplace. We made a short 4-minute video to show you how it works. And if you prefer reading, you can take a look at the documentation.

The private Marketplace channel is available to any Nuxeo Connect user. Nuxeo Connect trial users can also use it. It works on Nuxeo Platform versions 5.8 and greater.

About the Marketplace package: it contains simple XML configuration, Java code, configuration templates, libraries, etc. All you have to do is respect the Nuxeo Marketplace package format.…

The post Private Marketplace Channel: Deploying Custom Developments appeared first on Nuxeo Blogs.

[Q&A Friday] How to Manage Users with the REST API

How to Manage Users with the REST API

Today we have a question from Christian who asks how to create a user using the REST API. Using this API, you have access to several resource endpoints (document, user, group, automation) as well as several adapters (children, search, page provider, ACL, audit, business objects...).

So here’s how to use the user endpoint.

Get a User

If I want to get information about a particular user, I can simply go to the following URL: http://localhost:8080/nuxeo/api/v1/user/ldoguin

This will return a JSON answer looking like this:

{ "entity-type" : "user",
  "extendedGroups" : [ { "label" : "Administrators group",
        "name" : "administrators",
        "url" : "group/administrators"
      } ],
  "id" : "ldoguin",
  "isAdministrator" : true,
  "isAnonymous" : false,
  "properties" : { "company" : "Nuxeo",
      "email" : "ldoguin@nuxeo.com",
      "firstName" : "Laurent",
      "groups" : [ "administrators" ],
      "lastName" : "Doguin",
      "password" : "",
      "username" : "ldoguin"
    }
}

Take a close look at the returned object; it's important for what's coming next. What's interesting here is the entity-type property set to user.

Create a User

To create a new user, you have to send a POST request with the following data to this URL:

http://localhost:8080/nuxeo/api/v1/user/

Using curl, it would look like this:

curl -X POST -H "Content-Type: application/json" -u Administrator:Administrator -d "{ \"entity-type\": \"user\", \"id\":\"psteele\", \"properties\":{\"username\":\"psteele\", \"email\":\"psteele@greenman.com\", \"lastName\":\"Steele\", \"firstName\":\"Peter\", \"password\":\"psteele\" } }" http://localhost:8080/nuxeo/api/v1/user

The JSON answer should look like this:

{ "entity-type" : "user",
  "id" : "psteele",
  "extendedGroups" : [  ],
  "isAdministrator" : false,
  "isAnonymous" : false,
  "properties" : { "company" : null,
      "email" : "psteele@greenman.com",
      "firstName" : "Peter",
      "username" : "psteele",
      "groups" : [  ],
      "lastName" : "Steele",
      "password" : ""
    }
}

Be very careful about the escaping. Again the entity-type property is set to user. This is why Christian had an issue in the first place, as he was using NuxeoPrincipal as entity-type.

Modify a User

I forgot to add the company of the user during its creation. It’s easy to modify. Here’s another curl example:

curl -X PUT -H "Content-Type: application/json" -u Administrator:Administrator -d "{ \"entity-type\": \"user\", \"id\":\"psteele\", \"properties\":{\"company\":\"SUPERGREEN\"}}" http://localhost:8080/nuxeo/api/v1/user/psteele

The first big difference from the previous call is that you need to add the user id at the end of the URL. It's a resource endpoint, so you need to identify that resource. The second difference: we use PUT for modification instead of POST. And last, you don't need to send all the existing information; you can simply send the properties to modify. Just know that the id is mandatory.

Delete a User

Since this was just for test/demonstration purposes, let’s remove that user:

curl -X DELETE -u Administrator:Administrator http://localhost:8080/nuxeo/api/v1/user/psteele

As you can see this is much simpler. You still have the user id at the end of the URL. There is no JSON needed in the request body, just make sure you use DELETE.

The post [Q&A Friday] How to Manage Users with the REST API appeared first on Nuxeo Blogs.

Authenticating to the Nuxeo Platform with OAuth.io


Today, we will discuss how you can start an application using OAuth.io‘s Nuxeo provider. It allows you to easily integrate OAuth2 authentication with the quick and robust OAuth.io.

You might already know OAuth.io as we invited its creators to participate in a previous Nuxeo Tech Talk Meetup.

First, you need to create a free account on OAuth.io. As you might know, OAuth2 client authorization works with a ClientId/ClientSecret pair, and they need to be registered in your Nuxeo server. Download and start a fresh Nuxeo Platform 5.9.2 with the nuxeo-dm package installed. Then go to the Admin Center, OAuth/OpenSocial, Consumers tab, and register a new client.

We are going to make some AJAX calls between different domains, so before going further you need to take care of the CORS mechanism. If you are not very confident with it, I recommend you simply deploy the contribution described in our documentation.

Then set your clientId and clientSecret in your OAuth.io Nuxeo provider configuration page, like below:

Screenshot 2014-02-25 12.03.14

Beware: OAuth2 shares the token as an HTTP header, so you must enable HTTPS on your server.

If everything goes well, click the “try auth” button in your OAuth.io key manager and you should see an access token in the result box.

Now, we can play with the magic OAuth.io library to use this authentication in our app.


Test OAuth.io with Nuxeo:

<meta charset="utf-8" />
<script type="text/javascript">
    OAuth.initialize('my_oauth.io_public_key');

    $(function() {
      $("#popup").click(function() {
        OAuth.popup('nuxeo', function(error, result) {
          $("#token").html(result.access_token);
        });
        return false;
      });
    });
</script>

<h1>My access token: <span id="token"></span></h1>
<a href="#" id="popup">Open Oauth</a>

As soon as you get an access token, you just pass it as an HTTP Header like this:

Authorization: Bearer {access_token}

You will be able to make requests to Nuxeo using our JavaScript client… but that’s another story…

The post Authenticating to the Nuxeo Platform with OAuth.io appeared first on Nuxeo Blogs.

Export Data with Content Automation


At Nuxeo, we do internal reporting using a BI tool that fetches data from a PostgreSQL database used as the internal data warehouse. Many sources are filled in this database. Some of this information comes from Connect. In Connect, you find objects controlling subscribed online services, customers entities, studio projects, applications descriptions and a few other entities.

To fill the internal data warehouse with this data, I could have used the Nuxeo – Mule ESB connector, or integrated at a much lower level with the database. But this time, to export data from Connect, I wanted to play with a pure Automation solution which, if not as simple as a big SQL join, at least only requires HTTP (port 80) communication between the two systems and uses a maintained API. Plus it allows for very easy reformatting of data, and is a good example to teach you a few things about using Automation!

I wrote an automation chain that flattens all this data into a CSV file. The file is then loaded in a document in the repository, and also sent by mail to people expecting it. The chain is launched using the Nuxeo scheduler. Every Sunday it sends an event in the Nuxeo Platform event bus, where an event handler listens to it and calls my CSV export chain. Then a Python script on the data warehouse server downloads the file that was loaded in the Nuxeo document. Once it has been updated, the script imports the content in the PostgreSQL database.

It is interesting to explain how the export chain works, so as to review two notions:

  • How to produce a CSV file using automation,
  • How the use of subchains can make your configuration easier to maintain and clearer to read.

Generating a CSV file with Automation

This relies on the “Render document feed” operation, available in the Conversion category. This operation takes a list of documents as input and produces a blob as output. It takes an FTL (or MVEL) template name as a parameter, and injects the document collection into the rendering context before rendering it. The collection is available under the “This” object. You can thus use the loop directive of FreeMarker to produce a CSV in your template:

title, description, creation date
<#list This as doc>
${doc.title},${doc.description},${doc.dc.created}
</#list>

Let the previous sample be a template named “csv_file_generation”. The following chain:

Fetch > Query (query: select * from client)

Conversion > Render documents feed (template name: csv_file_generation, mime type: application/csv)

will produce a CSV file (you can append a “UI > Download” operation and bind it to a user action for testing it). More documentation on the topic can be found on our wiki pages.

That was a simple situation. In my case, I have several objects that are linked via document ids stored in properties, and I want to produce one single CSV file with columns of values from all the objects that are relevant for one customer. That means that instead of using Fetch > Query and iterating on the result, I need to build my collection differently.

Controlling your Chain Execution with Operations from the Execution Flow Category

I chose the following strategy:

  • Initialize a collection of maps that will contain, for each line of the required CSV, one map computed in a subchain, using Execution Context > Set Variable (value: @{new java.util.HashMap()})
  • Select all the customers using Fetch > Query
  • For each customer, call a “FetchCustomerMap” chain using Execution Flow > Run Document Chain, which executes the subchain for each document
  • Call the Render Document Feed operation to produce the final CSV as we’ve seen earlier. As the collection of maps (filled in the Run Document Chain step) is in the context of the automation chain, it is also available in the template, and that’s what I’ll use for looping and producing the CSV
  • Store the generated blob in a document of the repository, using Document > Set Blob
  • Send that file via email using the Notification > Send Email operation
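The template used in the final rendering step can then loop over the collection of maps instead of documents. A sketch, with hypothetical variable and column names:

```
customer, contract, studio project
<#list customerMaps as m>
${m.customer},${m.contract},${m.project}
</#list>
```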

We can now zoom in on the FetchCustomerMap chain that is executed for each customer. Its role is to create one line of our CSV file. Since data is spread among different objects, we use some Run Chain operations to isolate the process for each object, so as to make the configuration clearer and easier to maintain:

  • Create a map object using Execution Context > Set Context Variable
  • Run a subchain to fetch contract data and set it in the map, using Execution Flow > Run Chain
  • Run a subchain to fetch studio project data and set it in the map, using Execution Flow > Run Chain
  • Run a subchain to fetch application data and set it in the map, using Execution Flow > Run Chain
  • Add the map to the collection object that was created in the parent chain.

You may have to control the transaction timeout of your server if processing takes too much time, or run each step of the loop in a separate transaction. That’s all for today!

The post Export Data with Content Automation appeared first on Nuxeo Blogs.

[Q&A Friday] How to Write a JSF Validator for a Nuxeo Studio Widget Field


How to Write a Validator for a Nuxeo Studio Widget Field

Here’s a question from zod who asks how to write a custom validator method.

The first thing to know before we proceed is that a Nuxeo Studio widget is rendered using JSF. So the question is how to write a JSF validator. And since we are using the Seam framework to leverage JSF, the complete question is how to write a JSF/Seam validator.

Let’s take a simple example. I created a String field called email and I want to make sure it is filled in correctly. For the purpose of this blog post, I will only check that the email address contains an ‘@‘ character.

Let’s write a Java class that has a validation method and is a Seam component:

package org.nuxeo.sample;

import java.io.Serializable;
import java.util.Map;

import javax.faces.application.FacesMessage;
import javax.faces.component.UIComponent;
import javax.faces.context.FacesContext;
import javax.faces.validator.ValidatorException;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.jboss.seam.ScopeType;
import org.jboss.seam.annotations.In;
import org.jboss.seam.annotations.Name;
import org.jboss.seam.annotations.Scope;

@Name("businessValidatorSample")
@Scope(ScopeType.STATELESS)
public class BusinessValidatorSampleBean implements Serializable {

	private static final long serialVersionUID = 1L;

	private static final Log log = LogFactory
			.getLog(BusinessValidatorSampleBean.class);

	@In(create = true)
	protected Map<String, String> messages;

	public void validateEmail(FacesContext context, UIComponent component,
			Object value) {
		String email = (String) value;
		// Guard against a null value (empty field) before checking the content.
		if (email != null && email.contains("@")) {
			// ok
			return;
		}
		String msg = messages.get("error.notifManager.noUserSelected");
		FacesMessage message = new FacesMessage(FacesMessage.SEVERITY_ERROR,
				msg, msg);
		throw new ValidatorException(message);
	}

}

As you can see, the SEAM component is named businessValidatorSample and the validation method, validateEmail. This means that in Nuxeo Studio, during the widget configuration, we’ll have to write #{businessValidatorSample.validateEmail} in the Validator field.

Validator Sample

As for the code, it’s a really simple. Just cast the parameter value as a String and verify if it contains the ‘@‘ character. Know that any method you use as a validator must have the following signature: public void methodName(FacesContext context, UIComponent component, Object value)

And the method must throw a new ValidatorException if the value is not as intended.

I might do a marketplace package with a bunch of pre-configured validators. Let me know if you would find it useful and if so what validators you would like to find in it.

The post [Q&A Friday] How to Write a JSF Validator for a Nuxeo Studio Widget Field appeared first on Nuxeo Blogs.


How to Sign an Application for Windows and OS X

$
0
0

If you want to deliver a desktop application for Windows and/or OS X at some point you will need to get interested in code signing. Windows and OS X have some default security policies to prevent users from running software downloaded off the Internet if it has not been signed, so binary packages need to be signed!

For an unsigned application, under Windows, users only need to click “Yes” in a number of popups to get through the security check, which they are probably used to…

Yet under Mac OS X, unless the Security & Privacy settings are changed to allow applications downloaded from Anywhere (instead of Mac App Store and identified developers only) or they right / Ctrl click on the file, users simply won’t be able to launch the application! Apple fans will probably say this is a sensible way for Apple to control software quality. A valid certificate indeed shows that your software hasn’t been altered or corrupted and, if it turns out to be malware, Apple can revoke your certificate. Though one can also see it as a way for Apple to control Mac developers even more than it already does, while simultaneously extorting $99 per year from each and every one of them.

In any case, this could be a serious obstacle for Mac OS X users, so if you are shipping software for the Mac, you really need to sign it.

We’ve spent quite some time to understand code signing and figure out how to implement it for both operating systems in an automated way so that our continuous integration platform could handle it for the Nuxeo Drive application. We’ve tried to summarize the process in a step-by-step guide on how to sign Nuxeo Drive as a Mac OS X and Windows application.

Let’s first have a look at the various warning or blocking popups you might have when installing an unsigned application.

Installing an Unsigned Application Under Windows

These popups are only warnings, but the “Unknown” aspect might be scary for some users.

Warning popup when opening the Nuxeo Drive .msi file
Warning popup when opening the Nuxeo Drive .msi file
Warning popup at the end of Nuxeo Drive installation
Warning popup at the end of Nuxeo Drive installation

Opening an Unsigned Application Under Mac OS X

This popup is blocking.

Blocking popup when opening the Nuxeo Drive application
Blocking popup when opening the Nuxeo Drive application

Now let’s have a look at the various warning popups you should have when installing a signed application.

Installing a Signed Application Under Windows

Warning popup when opening the Nuxeo Drive .msi file
Warning popup when opening the Nuxeo Drive .msi file

If you click on the Nuxeo link you can have the details of the code signing certificate, as in the screenshot below:

Nuxeo certificate details
Warning popup at the end of Nuxeo Drive installation

Opening a Signed Application Under Mac OS X

Warning popup when opening the Nuxeo Drive application

Code Signing Overview

Though there are several ways to sign an application, let’s have a look at the main principles.

Windows

Obtain a signing identity

You first need to get a signing identity delivered by a trusted certification authority like Comodo or VeriSign. Such a signing identity is generally made up of a certificate and a private key. The simplest approach is to create a PFX file from the certificate and private key using openssl under Linux (yes, you will always need a Linux box at some point – at least we didn’t find a better way…). Copy the PFX file to the Windows build machine, as it will be used directly to sign the code.
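As a sketch of that step (key.pem and cert.pem are placeholder names for the private key and the certificate delivered by your certification authority, and changeit is a placeholder password), the PFX bundle can be built like this:

```shell
# Bundle the certificate and its private key into a password-protected
# PFX (PKCS#12) file that SignTool can consume on the Windows build machine.
openssl pkcs12 -export \
  -inkey key.pem \
  -in cert.pem \
  -out certificate.pfx \
  -password pass:changeit
```

The password set here protects the private key inside the PFX; it is the one you would later pass to SignTool with the /p option.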

Sign the code

Use the SignTool tool provided by the Windows SDK to sign your application.

signtool sign /v /f "<certificate_path>\certificate.pfx" /d "Nuxeo Drive" /t http://timestamp.verisign.com/scripts/timstamp.dll nuxeo-drive-1.3.0204-win32.msi

  • /v Verbose
  • /f PFX certificate file path. If the file is protected by a password, use the /p option to specify the password
  • /d Signed content description, used as the msi program name
  • /t URL of the timestamp server

Verify the code

signtool verify /v /pa nuxeo-drive-1.3.0204-win32.msi

Mac OS X

Obtain a signing identity

You first need to get a Developer ID account from Apple ($99 / year). Then generate a Certificate Signing Request (.csr) for a code signing certificate using openssl, and use it to obtain a Developer ID Application certificate from the Apple Developer Center. Finally, import the certificate and the private key generated along with the .csr into one of the keychains of your Mac OS X build machine.
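The CSR generation itself is a standard openssl operation. A minimal sketch, with placeholder file names and subject fields (adapt the email and common name to your Apple developer account):

```shell
# Generate a private key, then a Certificate Signing Request for it.
# The resulting .csr is what you upload to the Apple Developer Center
# to obtain the Developer ID Application certificate.
openssl genrsa -out developer_id.key 2048
openssl req -new \
  -key developer_id.key \
  -out developer_id.csr \
  -subj "/emailAddress=dev@example.com/CN=Nuxeo Drive/C=FR"
```

Keep the .key file safe: it is the private half of the identity you will import into the keychain alongside the certificate Apple returns.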

Sign the code

Use the codesign command line tool to sign your application.

codesign -s <identity> <code-path> -v

  • The <identity> can be named with any (case sensitive) substring of the certificate’s common name attribute, as long as the substring is unique throughout your keychains
  • The <code-path> value may be a bundle folder or a specific code binary, for example Nuxeo\ Drive.app
  • -v option is for verbose

Verify the code

codesign -vv Nuxeo\ Drive.app

This checks that the code is actually signed, that the signature is valid, that all the sealed components are unaltered, and that the whole thing passes some basic consistency checks.

Getting more information about code signature

To display all details about the code signature such as the hash type, signature size or signing authority, use the following command:

codesign -d -vvv Nuxeo\ Drive.app

Test code signing using the spctl tool

spctl --assess --type execute Nuxeo\ Drive.app --verbose

If your application or package signature is valid, this tool exits silently with an exit status of 0. If the signature is invalid, it prints an error message and exits with a nonzero exit status.

In case of success this should output something like:

nuxeo-drive/dist/Nuxeo Drive.app: accepted

source=Developer ID

That’s it, happy code signing!

The post How to Sign an Application for Windows and OS X appeared first on Nuxeo Blogs.

How to Manage Dates in Automation Chains

Here’s a question I actually ask myself quite often. Managing dates with Studio is not particularly intuitive. Part of the reason is that you don’t always manipulate the same kind of object in the same context. The other part is that the Date APIs in Java are simply not easy to use. To learn more about this, you really should read the documentation Bertrand wrote about expressions and scripting languages.

This post is focused on Date handling in automation chains. When you’re building an automation chain, the scripting language used is MVEL. When writing MVEL script, you have access to most of the regular Java Objects. Take a look at the Type Literals documentation for more details. The next step to better understand dates in automation chains is to know which kind of Java objects you’re dealing with. Every date retrieved from a document property will be represented as a GregorianCalendar.

Assign a Date from a Document Property

So what can you do with dates? A simple example is setting an expiration date 10 days after the creation date of the document. My first naive implementation looked like this:

Fetch > Context Document(s)
Document > Update Property
  value: @{Document["dc:created"].add(java.util.Calendar.DAY_OF_MONTH, 10)}
  xpath: dc:expired
  save: true

This automation chain runs at the creation of a document. What actually happens is that dc:expired ends up null and 10 days are added to the creation date. This is far from what I wanted to achieve :-)

The dc:expired property is null because the add method returns nothing, which means that the value field will be ‘nothing’. And the dc:created property is incremented because that’s exactly what @{Document["dc:created"].add(java.util.Calendar.DAY_OF_MONTH, 10)} does: Calendar.add modifies the calendar in place, so the dc:created property is directly modified.

So let’s fix this with the clone method. Cloning Document["dc:created"] creates a new object, so modifying the clone obviously won’t modify the original.

Fetch > Context Document(s)
Scripting > Run Script
  script:
    Context["expirationDate"]=Document["dc:created"].clone();
    Context["expirationDate"].add(java.util.Calendar.DAY_OF_MONTH, 10);
Document > Update Property
  value: @{Context["expirationDate"]}
  xpath: dc:expired
  save: true

Format a Date as a String

Another common task is taking a date and formatting it as a nice String. Usually in Java you would use the SimpleDateFormat class. Its format method takes a Date object as a parameter, and to get a Date object from a Calendar you can call its getTime method. To store an existing date as a string in the context, the automation chain would look like this:

Fetch > Context Document(s)
Execution Context > Set Context Variable
  name: MyVariableName
  value: @{new java.text.SimpleDateFormat("MM-dd-yyyy").format(Document["dc:created"].getTime())}

Here I assign the creation date, formatted like 01-20-2014, to the context variable MyVariableName.

The CurrentDate Object

When you use the CurrentDate object in an Automation Chain, you’re actually using another Java Object called DateWrapper. Its goal is to ease the use of dates in MVEL. You can also get a DateWrapper using the provided functions in the scripting assistant:

wrappedDate = @{Fn.date(Document["dc:created"].getTime())}
wrappedDate = @{Fn.calendar(Document["dc:created"])}

Once you have a DateWrapper object, it’s really easy to get the different parts of a date (day, hour, minute, month, second, week, year, time, etc.), to format the date as a String, to add a number of seconds, minutes, hours, days, weeks, months or years to the date, or to convert it to a timestamp usable in an NXQL query.

If we take the expiration date example from before, adding the 10 days becomes easier and more readable:

Fetch > Context Document(s)
Scripting > Run Script
  script:
    wrappedDate = Fn.calendar(Document["dc:created"]).days(10);
    Context["expirationDate"]= wrappedDate.getCalendar();
Document > Update Property
  value: @{Context["expirationDate"]}
  xpath: dc:expired
  save: true

You don’t have to call the clone method, as cloning is pretty much what happens when the DateWrapper object is created. You still have to call the getCalendar method to retrieve the GregorianCalendar instance needed to set the property. In the future we’ll try to make this automatic so it becomes even easier.

In the same spirit, text formatting becomes simpler. Just compare this:

@{new java.text.SimpleDateFormat("MM-dd-yyyy").format(Document["dc:created"].getTime())}

to this:

@{Fn.calendar(Document["dc:created"]).format("MM-dd-yyyy")}

Let me know in the comments if you have any questions, or if you want to see more posts like this.

The post How to Manage Dates in Automation Chains appeared first on Nuxeo Blogs.

CoreOS Monitoring with Diamond and Graphite

Our development team is hard at work on nuxeo.io. The architecture is getting clearer as we try many different technologies revolving around Docker. One Linux distribution that stands out for hosting our containers is CoreOS. As we’re serious about this, we started working on CoreOS monitoring.

CoreOS is

Linux for Massive Server Deployments.

This is the Linux distribution we use to host our Docker containers for nuxeo.io. The goal of CoreOS is to offer a minimal, highly available Linux distribution with Docker, set up for clustering by default. That’s a perfect fit when building a PaaS. And when you’re building a PaaS the right way, monitoring is mandatory.

CoreOS Monitoring in Theory

We use several tools to take care of our CoreOS monitoring. If you want a deeper overview of our monitoring stack, you should read the post Mathieu wrote about it. It’s based on Diamond and Graphite for metrics collection, and on LogStash, ElasticSearch and Kibana for log management.

I’ve been looking for various ways of doing CoreOS monitoring, as it does not ship with any metrics collector as far as I can see. You have to know that CoreOS has no package manager whatsoever: if you want to install new things on CoreOS, you have to rebuild them using their SDK. So it’s not easy to install the Python-based Diamond. But CoreOS is made to run Docker containers, so the approach I used was to run Diamond in a container on CoreOS. For this to work, I needed to make sure Diamond had access to the /proc filesystem, as this is where it collects most of its metrics.

Accessing the host filesystem from a container is easy thanks to the volume option. To access /proc from my container, I can run it like this:

sudo docker run -t -i -v /proc:/host_proc:ro ubuntu bash

Here -v /proc:/host_proc:ro corresponds to [host_filesystem]:[container_filesystem]:read-only. This is the volume option that gives me access to my host’s /proc.

The next step is to tell the different Diamond collectors that they should look for metrics in /host_proc instead of /proc. Unfortunately, most of these collectors have the path hard-coded, so for the moment I forked the project and hard-coded /host_proc (yes I know, but I was eager to test it). Now that I know it works, I will try to parameterize this and send a pull request.

Get Practical

In the meantime the source code is on Github. You’ll find different docker images for Nuxeo, Diamond and Graphite. If you want to test it, I suggest you do it with CoreOS. You can have it running in no time thanks to Vagrant. Just checkout https://github.com/coreos/coreos-vagrant and follow the Readme instructions.

Once you have a CoreOS session open, you can check out the monitoring images and start building them:

cd nuxeobase
docker build -t nuxeo/nuxeobase .
cd ../nuxeo
docker build -t nuxeo/nuxeo .
cd ../graphite
docker build -t nuxeo/graphite .
cd ../diamondBuild
docker build -t nuxeo/diamond .

Then you can start your containers.

Start the graphite container:

docker run -h="graphiteServer" -p 8080:8080 -p 2030:2030 -p 2040:2040 -P -d nuxeo/graphite

Start the nuxeo container:

docker run -h="nuxeoServer" -p 80:80 -d -name nuxeoServer nuxeo/nuxeo

Start the diamond container:

docker run -h="diamondCollector" -d -v /proc:/host_proc:ro -link nuxeoServer:nuxeo -name collector nuxeo/diamond

With this particular setup you should have your Graphite instance available on port 8080 of your host and Nuxeo available on port 80. If you take a look at your Graphite instance, you should see metrics being stored. That’s exactly what I was looking for.

CoreOS Monitoring

The next step for me will be to use LogStash to forward logs to ElasticSearch and browse them through Kibana.

The post CoreOS Monitoring with Diamond and Graphite appeared first on Nuxeo Blogs.

[Q&A Thursday] Configuring Automatic Video Conversions on The Nuxeo Platform


Today we have a question from Hugues, who asks how to set up automatic video conversions based on a pre-defined transcoding profile.

Right now, when you upload a video to Nuxeo, it’s converted first to MP4 with a maximum height of 480 pixels, then to WebM with the same height. This is automatic; you don’t have to click on anything, you just have to create a Video document. On the summary tab you will also see a button to convert the video to the Ogg format with a maximum height of 480.

This is the default behavior and as usual it’s configurable through extension points. If you go on the Nuxeo Platform Explorer and type ‘video’ in the filter, you’ll be left with two extension points: automaticVideoConversions and videoConversions.

First, let’s talk about videoConversions. It defines all the conversions available on the summary tab of a document. The default contribution looks like this:

  <extension point="videoConversions" target="org.nuxeo.ecm.platform.video.service.VideoService">

    <videoConversion converter="convertToMP4" height="480" name="MP4 480p"/>
    <videoConversion converter="convertToWebM" height="480" name="WebM 480p"/>
    <videoConversion converter="convertToOgg" height="480" name="Ogg 480p"/>

  </extension>

As you can see, we have the same conversions I described earlier. The converter property refers to the Nuxeo converters defined through the appropriate extension point. The height property is the maximum height of the video; don’t worry, the original aspect ratio is kept. The name property is used as the identifier of the videoConversion and as its label in the summary tab of the video.

The second extension point, automaticVideoConversions, defines which conversions are launched automatically. It references conversions by the name defined in the previous extension point.

  <extension point="automaticVideoConversions" target="org.nuxeo.ecm.platform.video.service.VideoService">

    <automaticVideoConversion name="MP4 480p" order="0"/>
    <automaticVideoConversion name="WebM 480p" order="10"/>

  </extension>

Now let’s do a simple example and see what contribution we should write if we wanted to stop the automatic MP4 conversion, remove the Ogg option and add a new one. Let’s say we want a conversion to MP4 with a maximum height of 240, and we want it to be automatic. Here’s what we could do:

  <extension point="videoConversions" target="org.nuxeo.ecm.platform.video.service.VideoService">

    <videoConversion converter="convertToMP4" height="240" name="MP4 240p"/>
    <videoConversion enabled="false" converter="convertToOgg" height="480" name="Ogg 480p"/>

  </extension>
  <extension point="automaticVideoConversions" target="org.nuxeo.ecm.platform.video.service.VideoService">

    <automaticVideoConversion enabled="false" name="MP4 480p" order="0"/>
    <automaticVideoConversion name="MP4 240p" order="0"/>

  </extension>

To set this up with Nuxeo Studio, go to the Advanced Settings menu, create a new XML Extension and paste the above code. Then reload your project as you usually do.

Once you’ve done that, you won’t see the disabled conversions anymore, which means that if you had previously converted a video with the Ogg 480p profile, you won’t see it in the summary tab. But don’t worry, the converted videos are still stored on the document; you can access them like any other property.

The MP4 240p conversion will have to be triggered manually on existing documents, because automatic conversions are only launched when a document is created or modified.

The post [Q&A Thursday] Configuring Automatic Video Conversions on The Nuxeo Platform appeared first on Nuxeo Blogs.

A Content App Using Mustache, Bootstrap and nuxeo.js


One of the cornerstone events of the last Fast Track release (Fast Track 5.9.2) was Thomas’ work on the new JavaScript client. If you are a web developer, a top-notch new gen JavaScript killer, or simply someone who’s not into the JEE stack, you will love this client. It makes all the features of the Nuxeo Platform available from an HTML page.

Nuxeo.js is a library available on GitHub that you can import inside your page to wrap Nuxeo Platform API calls. Using nuxeo.js, you can easily get a file, upload one, transform content to PDF or a JPEG to a PNG, or even annotate pictures. You can perform full text search, create a version of a document, and much more!

Nuxeo.js is currently provided in two flavors: one that depends on jQuery, that you can include in your HTML pages, and one that you can “require” in a node.js application.

In this blog post, I will focus on the jQuery implementation. I have experimented with how we can leverage this nuxeo.js client, with the goal of having minimal technical setup requirements, in comparison to working with big JavaScript frameworks like Angular.js, Ember.js and so on. Those are also very interesting approaches (some projects based on Angular.js have already started), but not what I talk about here.

The Functional Goal

I am implementing a small HTML page and some JavaScript to display our customer references information, stored in our Nuxeo DM intranet. The goal is to help our sales teams share knowledge of their customers. Although very interesting, I won’t get into the business details today; that could be a later post. Let’s limit the context to the fact that there is a “Customer” document type with properties such as a description of the customer’s business, the project, the time to go live, the level of customization, the main modules used, the competitors during the sales phase, etc.

My webpage is quite simple: it displays a list of all references, allows me to browse each of them, display the properties and double-click on a field to modify its value. I can also drop a picture in the screenshot field. Note that as my goal was primarily to explore various topics around using the js client, I haven’t yet implemented the complete user story, or the CSS design. This is more like a lab. Here is a one minute video that will make things more concrete for the rest of the post.

The Technical Set Up, the Development Flow and the Software Architecture

I have a folder with an index.html page, a library folder with the JavaScript dependencies, including nuxeo.js, and a script folder with my main script, customer.js. I also have a css folder for CSS, an image folder and a templates folder.

I develop using the no-security mode of Chrome (on a Mac: open /Applications/Google\ Chrome.app --args --disable-web-security), but I could also have configured Nuxeo with CORS. I use Sublime Text as my text editor.

Once I am happy with the result, I zip my index.html file and its side folders and I drop it in Nuxeo. It is then available using the preview restlet: this is an often unknown “feature” of Nuxeo preview service — it unzips the zip to check if there is an index.html file, and then just serves the static website (you can right click in the preview popup to get the URL of the corresponding frame).

The Nuxeo client connection happens in the index.html on first load of the page. Then the customer.js file contains a set of functions for most of the user actions:

  • Perform the necessary requests to the Nuxeo Platform server,
  • Render data via templates using mustache.js, and
  • Update the initial web page DOM using jQuery, to inject the result of the templating phase.

I use Bootstrap for the HTML content of the templates and the main index.html page. I also added a JSON array that contains useful definition data of all the form fields, so as to facilitate maintenance by centralizing the information and maximizing generic code. No doubt that in the future, this definition will be in Studio!  ;-)

Where to Find nuxeo.js, the Nuxeo Platform JavaScript Client

Nuxeo.js is available on GitHub. It is currently in version 0.1; the 1.0 version is targeted for the next LTS. You just need to include it as a library on your page to start using it:

<script src="lib/nuxeo.js"></script>

I recommend reading the available test suites, which are good practical documentation for understanding how to use the nuxeo.js library. They use the node.js implementation, but nothing changes in the syntax and objects used.

First Steps Using nuxeo.js

As I said, my web page first displays a list of customers. To start with, the client needs to be instantiated once in the HTML page:

var connectInfo = {
  baseURL: "http://localhost:8080/nuxeo"
};
var client = new nuxeo.Client(connectInfo);

Here I don’t need to pass credentials, as I deploy the pages on the Nuxeo Platform (see the technical setup section above) where users are already authenticated. But I could have added a username and password to the connectInfo object (and soon we will add token-based authentication schemes). Then I set the schemas (document properties) I want to fetch for all the documents I will get in my coming requests:

client.schemas(["dublincore", "nuxeo_sales_info", "nuxeo_customer_identification"]);

I am ready to use the client to start talking to the Nuxeo server! I wrap the query that fetches the Customer objects in the browseListOfCustomers() function:

client.operation("Document.Query")
  .params({
    query: "select * from Customer where ecm:currentLifeCycleState != 'deleted'"
  })
  .execute(function(error, data) {
    // In the callback function you implement what you want to do with the
    // server response once you have received it. The "data" object is JSON
    // from our REST API. You can use console.log(data) to introspect it,
    // or browse nuxeo/api/v1/doc on a Nuxeo server for more details.
  });

Another example fetches the vocabulary values. I wrap the call in a function as it is used multiple times:

function getVocabularyData(directoryName, callback) {
  client.operation("Directory.Entries")
    .params({ "directoryName": directoryName })
    .execute(function(error, data) {
      callback(data);
    });
}

Getting a document once you have its id is also very simple:

client.document(customerId).fetch(function(error, data) {
  // callback
});

But my favorite one is definitely the file upload, which wraps the batch upload API in an elegant way. Here is the sequence, where file is a JavaScript File object:


importOp = client.operation('Blob.Attach').params({
  document: currentDocId,
  save: 'true',
  xpath: 'npi:screenshot1'
});
importOp.uploader().uploadFile(file, null);

// If the operation I called accepted multiple blobs, I could have made
// multiple uploadFile calls here, before calling the execute method.

importOp.uploader().execute(function(error, data) {});

Neat, right?

Mustache: the Templating Framework, with the jQuery Mustache Plugin

Just before working on this example, I had quickly built a roadmap visualizer website (wait for next week’s blog to learn more!) where I assembled the HTML in the middle of my JavaScript functions with string concatenations. Escaping all the HTML was a real nightmare, and the result is hard to maintain: you just don’t want to go back to it once you’ve finished. So this time, even if my initial goal was to stay simple, I wanted to deal with this problem. I had a look at several JavaScript templating frameworks, and Mustache seemed well documented and one of the most widely used. The principle is to provide a string that is the template and to have variables replaced in it (the following example is from the Mustache documentation):

var view = {
  title: "Joe",
  calc: function () {
    return 2 + 4;
  }
};
var output = Mustache.render("{{title}} spends {{calc}}", view);

Having fewer concatenations to do already makes things cleaner, but you still have to keep all your template strings somewhere, and if you leave them in JavaScript vars, you still have to handle the escaping. That’s where the jQuery Mustache plugin makes things magical: the template strings are stored in HTML files that the plugin can easily load and then reference in rendering executions.

In a template file, let’s say customer.html, I have a series of <script> instructions that contain the template strings:

<script id="CustomerView">
My Template string {{value1}}
</script>
<script id="my2ndTemplateName">Another templatized string {{.}}</script>

Then, if I want to use a template string, let’s say my “CustomerView”, here is the script flow, where doc is the document backing the Customer data:

$.Mustache.load('./templates/customer.html').done(function() {
  var content = $.Mustache.render('CustomerView', doc);
  $('body').html(content);
});
What is interesting here is that I can easily organize my templates in separate HTML files, where the graphic designer won’t be lost if he has to tune a few things! See listOfCustomers.html for a simple example, or the customer.html template for a more complete one (with multiple templates inside one template file). One problem I had using Mustache.js was that templates were cached and the cache was never cleared, so I could not see my modifications without clearing the whole browser cache. Thanks to this blog post, I found a workaround playing with the Chrome JavaScript console settings.

In the end, I have a template for the list of customers, one for viewing a customer, and some small ones for switching a field from view mode to edit mode: on a dblclick event, I call switchToEdit(propertyId, value), which re-renders the zone with a template containing the edit field (a select, a textarea, …). On the onblur event of the rendered edit block, I call the switchRead function, which re-renders the view-only block.

Summary of Used Libraries

Finally, my script include section contains:


<script src="lib/jquery-1.10.1.min.js"></script>
<script src="lib/mustache.js"></script>
<script src="lib/jquery.mustache.js"></script>
<script src="lib/nuxeo.js"></script>
<script src="lib/bootstrap.min.js"></script>
<script src="lib/dropzone.js"></script>

<script src="js/customer.js"></script>

Aside from customer.js, that’s all you need to get started!

More to Talk About, More to Implement!

Following this post I will continue to cover this example. I plan to:

  • Play with dropzone.js for uploading files in Nuxeo. For curious people, there is already an initial integration in the code base, but there are more interesting things to do around it.
  • Describe a way of leveraging, in the HTML page, blobs that come from Automation requests (PDF transformations, data merging with office files, image resizing…). A small improvement will be added for this in an upcoming Fast Track.
  • See the various options for authentication. (We actually need to wait for more work on the client for that one.)

Looking forward to hearing about your first use cases!

The post A Content App Using Mustache, Bootstrap and nuxeo.js appeared first on Nuxeo Blogs.
