Tag Archive for: REST API

In September, we announced our latest collaboration with Portland State University’s Computer Science Capstone program, Test Runner for Jama Software, which is a native iOS application.

We’re pleased to say that Test Runner is now available for download in the Apple App Store. Test Runner is available to all Jama Software customers with hosted instances.

Test Runner Features

The PSU students worked tirelessly for six months on the app, building out its functionality with help and guidance from Jama’s professional services team.

Test Runner was designed to mimic the workflow of Jama’s very own Test Center, allowing users to intuitively navigate through the app and quickly run through all assignments on the go. This gives current customers all the benefits of Jama’s test management abilities, with a greater degree of flexibility in the process.

Here’s a quick walkthrough of Test Runner, and some of the actions that can be completed while using it:

  1. First, a user must have access to a hosted instance of Jama to use the app.
  2. Next, the user is manually assigned a handful of tests from within Jama.
  3. When the user is ready to test, they simply log in using their username, password, and instance name (for example, https://test-instance.jamacloud.com would have the instance name “test-instance”).
  4. Once logged in, users will be prompted to select the project they wish to test.
  5. With a project selected, the user will step through an accordion-nested structure of test plans, cycles, and runs. All test plans within a project are displayed. Within each test plan, all available cycles will be listed. Each cycle contains all assigned test runs to the user.
  6. When a test run is selected, users are taken to the test screen, where they can work through one step at a time. Each test step can be marked as passed or failed, and additional notes can be taken for any given step.
  7.  An image can be uploaded for the overall test run, as well as any general notes the user may wish to include.
  8. Once a test has been submitted from the app, it will no longer appear in the selection list. Only “Not-Run” test runs will be visible from the app.

Looking Ahead

The PSU capstone students who built Test Runner truly went above and beyond for our customers to deliver this simple, useful, and exciting app. And there’s more potential to come.

Since Test Runner will remain an open source project, users can actively contribute by submitting updated functionality or issues to the team’s repo on GitHub. Users are also encouraged to leave reviews on the Apple App Store.

Download Test Runner Now

 

Pictured from left to right: Lauren Cooper, Will Huiras, Jason Ritz, Ben Lawrence, Devan Cakebread, Meghan McBee, and David Wagg.

Jama Software is committed to our mission of helping customers bring their innovations to market faster, and with our reliable REST API we are doing just that. From custom data integrations to ETL tools, REST has proven to be an invaluable asset utilized by many of our customers.

REST has also led to some innovations for Jama. In 2016, we worked with students from Portland State University (PSU) in the Capstone Program to develop a trace visualization tool: OverView for Jama.

Today, we are thrilled to announce our latest collaboration with this year’s PSU Capstone students, who developed the very first iOS mobile application for Jama, Test Runner. The application, which has been submitted to Apple’s App Store and is currently under review, will allow users to view and execute tests that have been assigned to them on the go.

Motivated Team

This year’s Capstone team was made up of seven talented students — Lauren Cooper, Will Huiras, Jason Ritz, Ben Lawrence, Devan Cakebread, Meghan McBee, and David Wagg — who are in their final year of computer science studies.

They were eager to be sponsored by Jama because “the project had a clearly-defined scope and purpose,” said team member Lauren Cooper.

Over several months, the students dove into pair programming and other Agile practices to plan, build, and test their iOS app.

“Initially, we had a steep learning curve — we never wrote an iOS app before and were very new to Swift programming language,” said team member David Wagg.

The students worked closely with Jama’s professional services and UX teams to build the Test Runner app.

Amazing App

When asked what they learned about users of the Test Runner app, Lauren Cooper said, “We have new, profound respect for product testers.”

During their product demo of Test Runner for Jama staff, the team called out how refreshing it was to use an API that was well-documented, straightforward, and responsive. You can view the team’s open-source project now on GitHub.

We loved seeing the results of this project and hope the PSU team’s work inspires our customers as much as it has us. A huge thanks to the entire team and PSU!

Note: This post will be updated with a link to the Test Runner application on the Apple App Store when it becomes available. For now, users can clone the team’s repository on GitHub, and install the application onto their iOS devices via Xcode.

If you look at wildly successful technology companies, there’s a common theme: they manage complex and increasingly connected data. LinkedIn connects you to a dynamic professional network, Facebook to existing and new friends, Amazon to your next purchase, Google Searches to disparate parts of the internet. If structured connected information powers networking, socializing, and purchasing your favorite shampoo brand on subscription, isn’t it time an equally powerful experience is available for the daily challenges of product development?

Jama Software, too, is fundamentally a system designed to hold deeply connected data. It captures and manages the many-to-many relationships between strategic data, market and technical requirements, customer details, test result impacts: anything needed to manage a complex product through development to delivery.

Unfortunately, not every Jama user could take advantage of this product data network the way anyone can with a web search or social networking tool. Why not? The data was there, but the interfaces to actually *see* it were, well, not very visible. With the latest release, 8.14, that’s changing. We have a renewed focus on using that data to help every user who interacts with Jama. Across our toolset we’re finding ways to support, enhance, and visualize that connectivity.

Here’s a tour of what’s new in Jama Software 8.14! The product data network becomes more visible, more actionable, and the foundation for better conversations for all users.

For those building products, there are scenarios where you simply need to see what’s going on across the system, make decisions, and trigger actions outside of a tool in the real world. Connected data, or traced data as we call it inside Jama Software, likewise comes in a few flavors so it’s useful for various purposes.

1. Trace View: No matter where you are in the Trace View, the data displayed needs abundant context and clear ties to the rest of the work in your Jama system. Whether you’re exploring how your market requirements have been decomposed, or are anticipating a change to a requirement based on test results, the Trace View is designed to help users make meaning out of the noise. When an item is selected, the Trace View highlights that item and all of its lineage.

The item CL3-MR-2 is selected, describing a demographic requirement. To the right, in light blue (downstream), see how this high-level demographic requirement decomposes into system requirements.


Keep exploring in the Trace View to see how the system requirements choices ripple throughout the system to development, validation, and testing tasks.


Want to know what something is? Get a preview of any item while exploring the thread of a question through the system. See data results before leaving your page, similar to many search engines.


2. Move and Undo: Once you find the data set you care about, work doesn’t stop. You’re looking for change history, asking for clarification via item comments, adding your own content. When you’re working quickly, it’s easy to accidentally move content and lose it. With enhanced, visible user notifications when moves occur, it’s now more obvious what changes are being made. Undo is also now one click away: a powerful tool for power users making bulk changes and for data explorers who should have no fear that they’ll “mess it up” by traversing the decision history of their own product.

Move and Undo

3. Text Editor: Once a conversation leaves a small, tight-knit team that shares language shortcuts, external legibility depends on being able to accurately represent your information. Among the dozens of updates added to improve data entry, we were especially keen to add more support for the symbols engineers need, so requirements translate across teams.

Improved Text Editor

4. REST API: Sometimes data connectivity extends beyond the bounds of any one application. Jama is making it easier to keep those connections alive, complete, and efficient. With partial item updates and the ability to edit read-only fields (for users) via REST, it is easier to integrate cleanly and rapidly with outside applications. Similarly, test data is more visible and meaningful via REST with the addition of Test Run version info, sort order, and additional ways to search for test information.

What’s next? Try Jama 8.14, and from now on you can get more value out of all the data being created every day while you build the products of the future. Use connected product data as a daily tool to enhance your decision making and your conversations with others, just as you do for shopping and connecting socially. Involve more people while providing context they actually understand, anchored to their perspective and traced through to yours.

When it comes to performing major business functions, few software platforms can be described as do-it-all, standalone solutions. And for good reason: They may not want to be. According to independent research firm Gartner, “Although enterprise resource planning (ERP) vendors offer numerous enterprise applications and claim that their integrated system is a superior solution, all modules in an ERP system are rarely best-of-breed.”

In recent years, more enterprise software providers have adopted a best-of-breed and open ecosystem approach, as barriers to integrating tools and platforms have dissolved and the benefits of focusing innovation have become abundantly clear. Salesforce is an example of a company taking this approach and succeeding.

During the company’s rise to CRM leadership, there were undoubtedly moments in time where expanding the capabilities of the platform was tempting. But by choosing to focus on core competency and building an ecosystem of partners and app developers, Salesforce has been able to remain on top and enhance the value of the platform for its customers.

Salesforce gives growth-stage technology companies a good example to follow: By choosing the right partners and introducing strategic opportunities for integration, enterprise software providers can focus on meeting overall customer needs without needing to directly supply each element of the solution.

As the product development platform for software-driven, smart, connected products, our focus at Jama is helping our customers build the most complex products and systems in the world. Most customers come to Jama with existing processes and tools in place. The question for them is how the Jama platform complements and integrates with their more specialized solutions.

Ultimately, part of the reason they choose Jama is its flexibility and integration capabilities. Through our Jama Integration Hub, powered by Tasktop, we connect to the leading ALM developer tools including JIRA and TFS. In addition to our existing integrations, we’ve expanded our ecosystem over the past year with the introduction of our REST API. The move to REST was welcomed by our customers that have used the standards-based, easy-to-use API to develop customized traceability reports, record automated test results and more.

In another move to develop our ecosystem, we launched the Jama Alliance, partnering with many leading solutions providers within the product development space. To meet the needs of our systems engineering customers, we’re working with systems-based modeling tool providers like Intercax and No Magic.

And as we continue to expand our ecosystem, we’re exploring PLM, CAD and simulation tools and how we might work with them to build value for our customers. No one can predict the future, but there’s one thing I’ll bet on: As companies race to build the next generation of smart, connected products, they must take a systems approach, which requires the Jama platform to extend the reach of its integrations.

Our customers’ innovation and success drives us each day and we’re excited to continue our work of building a platform and ecosystem that will further enable that work. In the coming weeks and months, we’ll be rolling out new features and partnerships that build on our core competencies and help our customers simplify their complex product development. We’re excited about these advancements, but it’s the results—the so-called “impossible” products—that our customers will deliver that inspire me the most.

We recently hosted a series of Code School sessions within Jama aimed at teaching non-engineers how to program. I worked on two 90-minute classes that taught some of our Sales and Customer Service reps how to interact with the Jama API using Python. By the end of it, they had the skills to automate some simple tasks and were hungry to learn more.

The idea was brought up by our CPO, Eric Winquist, back in January. Eric had a hand in coding parts of our application back in Jama’s early days, so I like to believe he shares my view that coding is useful for all sorts of disciplines, not just engineering. It’s a useful tool for automating simple tasks and sharpening your critical thinking in everyday situations. I volunteered to teach the class, as I’ve had some experience teaching technology to youth, and I always enjoy seeing that “ah-ha” moment people have when they solve a difficult problem.

Many professions are seeing a rise in the number of mundane tasks that need to be automated so people can concentrate more on decision-making. GE’s CEO recently said that all of their new hires will learn to code, whether they are in sales, finance or operations. It’s an important skill to have, even if you don’t find yourself using it all the time. One quote I put at the very beginning of our first class was straight from Steve Jobs:

“Everybody in this country should learn to program a computer, because it teaches you how to think”

Thinking differently was something of a mantra for our classes. I also tried to point out that the computer isn’t just a platform for ready-made applications that you buy in the store. It can also be a Swiss Army knife that does your bidding – as long as you know how to use it. I feel like programming really brings out two qualities in most people: thinking outside the box and putting that thought into practice. Both are essential for any skilled professional, not just engineers.

About 25 people signed up for the Code School initially, and I got to work on something that would be accessible to any skill level. I also wanted to keep the material relevant to what we do at Jama, so I settled on teaching two sessions based on Python and the Jama API. The first session was an intro to programming with Python: everything from variables to if/then/else statements, to while loops, and even functions. I based a lot of the material for this class on the book Automate the Boring Stuff with Python.

The second session was based around our Jama REST API. I started the class talking about what the HTTP protocol is, how it allows you to interact with the web, what a REST API is and how to understand one, and finally what our Jama API is and what it looks like in the documentation. We spent a good amount of time going over these topics and exploring Jama’s interactive REST API documentation. Then we settled down and started to piece together a working Python script to update users in Jama through the API. I think this is where a lot of parts of both classes started to really come together, and people could see more potential in how programming could be used in their daily jobs.

To help get through the difficult sections, and through all of the random issues that typically come with programming, I enlisted a couple of Teaching Assistants (TAs) to sit in the back of the class and jump in whenever someone was having trouble. This was possibly the biggest contributor to a successful code school, and I owe everything to Max Marchuk (Front-End Developer) and Nicholas Lawrence (Support Engineer) for helping out in this role. Small technical issues and syntax errors can easily bring a class like this to a crawl, but with the TAs around I only had to pause briefly to resolve an issue before moving on to the next topic.

I was a little surprised by just how well everyone handled the material. Other than some bumps with the Python interpreter and syntax issues here and there, just about everyone in both classes understood the material well and asked many pointed questions. Most of the class came from our Customer Success and Sales departments, with one person from Operations and a couple from QA in Engineering.

After the classes were over, I sent out a survey to gather feedback on how the class went. Everyone who answered not only said how much they enjoyed the classes, but agreed that they would like to continue learning more Python or some other language in the future. A little over half of the responses revealed they could see themselves using these skills in their day jobs, and everyone agreed that we should host these types of classes again for more people. Some of the replies also speak for themselves:

“Having TAs was a huge help”
“I had a really great time and feel like I better understand the work you guys do, even if we only scratched the very, very, very surface.”
“It would be great to see more types of coding examples, C++, Java Script, Java, etc, etc.”
“Code School 3 and 4!” (After our first two sessions)

I’m really grateful to Jama not only for giving me the opportunity to teach these classes, but also for trying unique ways to boost our skill set as a company and to provide a bonding opportunity across departments that isn’t just another happy hour. Learning how to code might provide a way to automate a simple task, but more importantly it helps people think logically and programmatically.

I’m excited about this new library I ran into and I would like to share the excitement. It’s called Restito, it’s on GitHub, and I actually found it on a blog from 2012, so… where were you all this time?

Restito is a Java library for mocking out REST APIs. It’s self-described as the complement of REST Assured. REST Assured is also a library for testing REST APIs (like Restito), but it mimics the client (while Restito mimics the server); it has a fluent API (like Restito); and it’s very popular (no shame in aspiring to popularity). In fact, I see REST Assured as the gold standard for testing REST APIs. Restito, on the other hand, is new to me.


Problem

What problem are we trying to solve here? We are building a service that’s chatting with a remote instance of our Jama application, using Jama’s REST API.


Inside the service there is a whole chain of classes that depend on each other. There is a class JamaProxy, which has some business-level methods in the context of our Jama application. It calls into JamaRestClient, which understands how to form request URLs and payloads compliant with Jama’s REST API. JamaRestClient in turn uses javax.ws.rs.client.WebTarget and friends to set up the actual connection with the remote Jama application.


Concerns are separated, and when it comes to unit testing, each concern can also be unit tested separately. IntegrationHandler has a corresponding IntegrationHandlerTest, which uses a mocked version of JamaProxy. The same for JamaProxy, which uses a mocked version of JamaRestClient. But how do we unit test JamaRestClient?

JamaRestClient is strongly tied to WebTarget (in fact, its only purpose in life is transformation into and out from WebTarget); it makes no sense to abstract WebTarget out of JamaRestClient. I have tried to mock WebTarget in unit tests before, and found it notoriously hard to mock (hard meaning annoying and ugly). The way a lot of its method calls are chained (fluent API, builder pattern) doesn’t lead to very pretty unit test code. Also, mocking out a request library is scary because there are so many ways you can end up with a mock that doesn’t behave like the real thing.
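To give a flavor of that, here is a minimal sketch of stubbing a single chained call, assuming Mockito with deep stubs enabled; Item, someItem(), and SOME_ITEM_ID are hypothetical stand-ins matching the test further down:

// Deep stubs let each chained call return a mock automatically, but the test
// now mirrors the exact call chain and breaks whenever that chain changes.
WebTarget target = mock(WebTarget.class, RETURNS_DEEP_STUBS);
when(target.path("items")
        .path(String.valueOf(SOME_ITEM_ID))
        .request(MediaType.APPLICATION_JSON)
        .get(Item.class))
    .thenReturn(someItem(SOME_ITEM_ID));

And that is just one happy-path stub; every additional call chain needs its own.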

It would be more realistic if we used real communication to the real REST API. It would allow us to use the real request library. Restito lets us use real communication, without the need to set up a real Jama application to talk to. (If you are going to tell me that this is not real unit testing, please talk to the hand: we have narrowed the scope of the test to the smallest unit that makes sense, and we’re doing it in a way that increases confidence in our product.)

When it comes to integration testing, you might have the same problem. Depending on how you define integration testing, it may be undesirable to create a real Jama application to communicate with. Restito can help in the same way as for unit testing. It’s not a full substitute for “the real thing,” so we likely still want a system test that involves a real Jama application, but Restito helps boost confidence in earlier phases of development.

Including Restito

Restito is available as a Maven artifact, so you add it to the POM file as follows:

<dependencies>  
...  
    <dependency>  
        <groupId>com.xebialabs.restito</groupId>  
        <artifactId>restito</artifactId>  
        <version>0.8.2</version>  
        <scope>test</scope>  
    </dependency>  
</dependencies>

Your POM would also have dependencies on the client library, for example javax.ws.rs-api for WebTarget, but that’s beside the point here.

How It Works

Restito sets up a little web server on a random available port. It is then possible to set up expectations before exercising the test subject. Verifications can even be done afterward to make sure that calls to the REST API actually happened in the way expected. Here is some example code (non-essential constants and methods, as well as imports, are omitted for brevity):

public class JamaRestClientTest {  
    private StubServer server = new StubServer();  
    private ObjectMapper objectMapper = new ObjectMapper();  
  
    @Before  
    public void setUp() {  
        server.run();  
    }  
  
    @Test  
    public void testGetItem() throws JsonProcessingException {  
        Item expected = someItem(SOME_ITEM_ID);  
  
        whenHttp(server)  
                .match(get(format("/contour/rest/v1/items/%d", SOME_ITEM_ID)))  
                .then(ok(), jsonContent(expected), contentType("application/json"));  
  
        JamaRestClient subject
            = new JamaRestClient(baseUrl(), MY_USER_NAME, MY_PASSWORD, null);  
        Item actual = subject.get("items", SOME_ITEM_ID, Item.class);  
  
        assertEquals(SOME_ITEM_ID, actual.getId());  
    }  
  
    private Action jsonContent(Object object) throws JsonProcessingException {  
        return stringContent(objectMapper.writeValueAsString(object));  
    }  
  
    private String baseUrl() {  
        return format("http://localhost:%d/contour", server.getPort());  
    }  
}

Some things to note in the above code fragment: I have also introduced some usage of Jackson’s ObjectMapper, so that I don’t need to deal with literal JSON here, but can just pass an object in. I’ve not seen support for that in Restito, and I find that unfortunate.

The code is fairly easy to read, thanks to the fluent API of Restito, and without the need to mock out all the nested objects returned by WebTarget’s fluent API. I find it interesting how that almost sounds at odds: using a nice fluent API for a mock because fluent APIs are so hard to mock.

An example of a verification that could be added to the above example would be like this:

verifyHttp(server)  
    .once(method(Method.GET),
        parameter("documentKey", "DOC-KEY-123"),
        not(withHeader("x-some-header")));

The above code fragment shows how we verify that the server actually received an HTTP GET request once, with a certain query parameter, and specifically without a header called x-some-header.

Conclusion

I observe that Restito’s fluent API does not have the richness and cleanness of REST Assured’s, which it seems to aspire to. But I’ve fallen in love with it nonetheless, because I can see the void it’s filling. I was able to create a working test with Restito within 15 minutes of the moment I started Googling. (It was actually 14 minutes; I timed myself.) I know I will be using it a lot in the future.

Jama rolled out our REST API earlier this year, and we couldn’t wait to see what our customers would do with it. We’ve also had opportunities to work with organizations on good ways to leverage the API, including an exciting and unique engagement with Portland State University (PSU) computer science students in their Capstone Program. We’ve had great experiences working with PSU. Jama participates in their PCEP internship program, several Jama team members are alumni of PSU’s computer science program, and we’re always thrilled to work with students, helping them get a glimpse into how we make software to solve complex, real-world problems.


The Capstone team had seven students in their final year of computer science studies. They worked with our professional services and UX teams to build a web application that demonstrates the capabilities of Jama’s REST API and chose to focus on visualizing relationship data in Jama. “The opportunity to work with a local software company was incredible,” said team member Ricky Valencia. “We were excited to work with the Jama application and get the chance to learn from the team here.”

Over several months, the team dived into pair programming and other Agile practices to plan, build, and test their data visualization application. “Initially, we had a steep learning curve – a few of us hadn’t built web apps or even written in JavaScript,” said team member Michael Hansen. “We learned a lot during this process, including how to better gauge the amount of time needed to complete an objective and a lot about the intricacies of testing.” During their product demo to Jama staff, the team called out how refreshing it was to use an API that is solid and well-documented.

You can see their open-source project on the team’s GitHub. We loved seeing the results of this project and hope the team’s work inspires our customers as much as it has us.


Huge thanks to Michael Hansen, Ricardo Valencia, Chance Snow, Kathleen Tran, Marcus Week, Iman Bilal, Ruben Piatnitsky and Portland State University.

When I joined Jama as CEO earlier this year, I was excited to become part of a team that was passionate about our customers and solving their problems. The companies we get to work with are a major reason I wanted to join Jama to begin with — it’s an honor and a thrill to partner with them as they build products that will change their industries and the economy. I know I’m not alone in that enthusiasm: As I met individually with every employee during my first three months on the job, over and over again “our customers” was a top reason people cited for coming to work here.

Market Forces

Our customers span an array of critical industries — aerospace, financial and consulting services, medical devices, government, semiconductor, consumer electronics and automotive, to name a few. I’ve now had the privilege of meeting with dozens of them, and I’ve consistently heard them describe the following market forces in play:

The new generation of smart, connected products is increasing competition.

For the first time ever, when consumers buy something new, whether a phone, a thermostat or a car, they expect its capabilities to improve over time. They expect new features over the lifetime of the product, automatic fixes where there were previously recalls, and unprecedented options for customization. With each release of Jama, we’re rolling out new features and improvements that focus on enabling innovation for our customers. We invested in building our REST API to add even more customization and extend the functionality of our solution.

Increasing complexity and new regulation add new challenges.

Development cycles are more complicated than before, requiring close coordination of hardware and software teams, often using different tools and methodologies. Connected products introduce new security risks, often into industries that were previously immune to regulatory compliance. As software becomes an increasingly critical component of new cars, the automotive industry has responded with new compliance regulations such as ISO 26262, and so have we. This year we achieved ISO 26262 fit-for-purpose certification by TÜV SÜD to give our customers confidence as they navigate the path to compliance in their product development process.

Systems development teams require a purpose-built product development platform and must take a continuous engineering approach to create products for the modern world.

ALM was built for software, PLM was built for hardware, but today’s product teams require a unified set of capabilities. Teams need contextual, ongoing collaboration and a single source of truth for their data and requirements. In June, we released Jama 8, kicking off a series of releases that will build on our core traceability and collaboration features. We’re also investing in our product ecosystem with the launch of our Partner Alliance Program, working with best-of-breed solution providers to better serve our customers.

At face value, these challenges are daunting. But we get to see our customers overcome them each day through disciplined, modern management of their development processes, which lets them better capitalize on industry trends. As they work to deliver the life- and economy-critical products that are going to change the way we live, we’re glad to be their partners and are eager to foster their success every step of the way.

As a SaaS company, we are gradually chipping away at our good old monolith, turning pieces into micro-services that scale horizontally and scale efficiently through multi-tenancy. A single micro-service can have multiple instances, and each instance serves a multitude of customers. Multi-tenancy has implications for application state, and a common pattern is for database tables to be shared across tenants, with each record linked to a specific tenant. It also requires some lifting to make sure that the application always understands which tenant it is working for.

We are a Spring shop and happy users of Spring Boot for our micro-services. We recently built the “Jama OAuth service,” an OAuth 2-compatible authorization server that essentially issues access tokens to clients of our system (given their credentials). It implements OAuth’s so-called “client credentials” flow/grant type.

Spring Security

Jama OAuth service issues access tokens to clients of our system

The access tokens are used to protect some REST resources. It was a must-have requirement that the Jama OAuth service support multi-tenancy. It was a natural choice to look at Spring Security, specifically Spring Security OAuth. This library, however, does not support multi-tenancy out of the box.

When issuing access tokens, it is an interesting option to use JWT tokens. As this link shows, JWT tokens are not just a unique identifier by which the authorization server can verify your claim, rather they do include the details of the claim. In our case we could include not only information about the client, but also about what tenant they belong to. Our tokens could look something like below, which is awesome, because it would mean that resources in our ecosystem can validate an access token (extract all the information that they need) without having to contact the Jama OAuth service, an enormous performance gain.

{
    "exp": 12345,
    "scope": [
        "read"
    ],
    "tenant": "tenant1234",
    "jti": "d08e06d9-7408-4a01-bcf0-409ad23391ce",
    "client_id": "test1234"
}

This is almost exactly the JWT token that Spring Security OAuth spits out using its JwtAccessTokenConverter, except for the custom property “tenant”. Unfortunately, there is no clear extension point for adding custom properties. Conversely, when using Spring Security to validate an access token, what it gives your application code access to is an OAuth2Request, which does not include any custom properties. There is one more piece of tenant awareness; in all, we needed to address the following:

  1. Make sure that the application understands which tenant is the right one when an access token is requested. This one is simple for us: we have standardized on including an HTTP request header that identifies the tenant. If you forget to include this header, you are awarded an error message. If you include it, we can use it together with your provided user name and password to authenticate your request, and we then return you an access token (a JWT token); a sketch of such a request appears after this list.
  2. Add custom property “tenant” to JWT tokens.
  3. Read custom property “tenant” from JWT tokens and make it available to our application code.
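To make the first point concrete, here is a rough sketch of such a token request using the JAX-RS client. The endpoint path, the X-Tenant header name, and the credentials are placeholders for illustration, not our actual values:

// Hypothetical client-credentials token request that also carries a tenant header.
Client client = ClientBuilder.newClient();
String basicAuth = "Basic " + Base64.getEncoder()
        .encodeToString("test1234:secret".getBytes(StandardCharsets.UTF_8)); // client id and secret
Response response = client
        .target("https://auth.example.com/oauth/token")    // placeholder token endpoint
        .request(MediaType.APPLICATION_JSON)
        .header("X-Tenant", "tenant1234")                   // placeholder tenant header name
        .header("Authorization", basicAuth)
        .post(Entity.form(new Form("grant_type", "client_credentials")));

The response body would then carry a JWT access token like the one shown earlier.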

Add Custom Property

Creating tokens is a function of the authentication server (in our case the “Jama OAuth service”). JWT tokens are generated in Spring by the JwtAccessTokenConverter. So, of course we override that class to get our way. It is being configured in our Spring JavaConfig as follows:

@Bean
public JwtAccessTokenConverter jwtAccessTokenConverter() throws Exception {
    // specifically the following line:
    JwtAccessTokenConverter converter = new TenantAwareJwtAccessTokenConverter();
    converter.setKeyPair(keyPair);
    return converter;
}

Our custom implementation starts as follows:

class TenantAwareJwtAccessTokenConverter extends JwtAccessTokenConverter { ...

Inside that class we retrieve the details of our client, including the tenant, which we then add to the access token:

@Override
public OAuth2AccessToken enhance(OAuth2AccessToken accessToken, OAuth2Authentication authentication) {
    ClientEntity clientEntity = getClientEntity(authentication);
    Map<String, Object> info = new LinkedHashMap<>(accessToken.getAdditionalInformation());
    info.putAll(clientEntity.getAdditionalInformationForToken()); // the additional information includes "tenant"="..."
    DefaultOAuth2AccessToken customAccessToken = new DefaultOAuth2AccessToken(accessToken);
    customAccessToken.setAdditionalInformation(info);
    return super.enhance(customAccessToken, authentication);
}

When retrieving the details of our client we take the client ID given by Spring, and combine it with the tenant header from the request (that we require users to include when offering their client credentials).

private ClientEntity getClientEntity(OAuth2Authentication authentication) {
    String clientId = (String) authentication.getPrincipal();
    String tenant = TenantHeaderHelper.getTenantFromRequest();
    return getClientEntityFromDatabase(clientId, tenant); // this includes some assertions to make sure the requested client exists
}

Read Custom Property

This assumes that your resource server is also using Spring Security. It may or may not be the same component as your authentication server (in our case the “Jama OAuth service”). JWT tokens are processed in Spring by a small army of classes, but we chose to override the DefaultAccessTokenConverter. This needs to be injected into the JwtAccessTokenConverter, here of course our own TenantAwareJwtAccessTokenConverter. It is configured in our Spring JavaConfig as follows:

@Autowired
public void setJwtAccessTokenConverter(JwtAccessTokenConverter jwtAccessTokenConverter) {
    jwtAccessTokenConverter.setAccessTokenConverter(defaultAccessTokenConverter());
}

@Bean
DefaultAccessTokenConverter defaultAccessTokenConverter() {
    return new TenantAwareAccessTokenConverter();
}

Our custom implementation starts as follows:

class TenantAwareAccessTokenConverter extends DefaultAccessTokenConverter { ...

Inside that class we can get access to a map that contains the raw data extracted from the JWT token, before Spring throws out our custom properties. Note that the super implementation already returns an OAuth2Authentication object. Inside that object, we substitute the original OAuth2Request with our custom TenantAwareOAuth2Request.

@Override
public OAuth2Authentication extractAuthentication(Map<String, ?> map) {
    OAuth2Authentication authentication = super.extractAuthentication(map);
    TenantAwareOAuth2Request tenantAwareOAuth2Request = new TenantAwareOAuth2Request(authentication.getOAuth2Request());
    tenantAwareOAuth2Request.setTenant((String) map.get("tenant"));
    return new OAuth2Authentication(tenantAwareOAuth2Request, authentication.getUserAuthentication());
}

Our custom TenantAwareOAuth2Request looks as follows. Thanks to a useful constructor in the base class (“copy constructor”) our custom class remains relatively simple.

/**
 * Add a tenant to the existing {@link OAuth2Request}.
 */
public class TenantAwareOAuth2Request extends OAuth2Request {
    public TenantAwareOAuth2Request(OAuth2Request other) {
        super(other);
    }

    private String tenant;

    public void setTenant(String tenant) {
        this.tenant = tenant;
    }

    public String getTenant() {
        return tenant;
    }
}

In your application code you can get access to this object in the usual ways, except casting to TenantAwareOAuth2Request, rather than OAuth2Request. Here is an application example:

TenantAwareOAuth2Request request = getOAuth2RequestFromAuthentication();
String clientId = request.getClientId();
String tenant = request.getTenant();
// do something with this information

And:

public static TenantAwareOAuth2Request getOAuth2RequestFromAuthentication() {
    Authentication authentication = getAuthentication();
    return getTenantAwareOAuth2Request(authentication);
}

private static TenantAwareOAuth2Request getTenantAwareOAuth2Request(Authentication authentication) {
    if (!(authentication instanceof OAuth2Authentication)) {
        throw new RuntimeException("unexpected authentication object, expected OAuth2 authentication object");
    }
    return (TenantAwareOAuth2Request) ((OAuth2Authentication) authentication).getOAuth2Request();
}

private static Authentication getAuthentication() {
    SecurityContext securityContext = SecurityContextHolder.getContext();
    return securityContext.getAuthentication();
}

If your resource server is not using Spring Security, there are other libraries that can read JWT tokens and extract the tenant from them. We have successfully used JJWT for that.
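As a rough sketch, assuming an RSA key pair like ours and JJWT on the classpath, reading the tenant claim could look like this:

import java.security.PublicKey;
import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jwts;

// Verify the token signature with the authorization server's public key and
// read the custom "tenant" claim that was added when the token was issued.
public static String readTenant(String accessToken, PublicKey publicKey) {
    Claims claims = Jwts.parser()
            .setSigningKey(publicKey)
            .parseClaimsJws(accessToken)
            .getBody();
    return claims.get("tenant", String.class);
}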

Conclusion

While Spring Security does not make it very easy to add your own properties to JWT tokens, it can certainly be done in an acceptable manner. Having tenant information available in JWT tokens makes these tokens “fully qualified” in a multi-tenant environment, and thus usable without needing to retrieve additional (tenant) information when given an access token. This also makes it possible to do multi-tenant-enabled authentication, even on resource servers that aren’t the same component as your authentication server.

You can see how this approach would work for additional properties, on top of just the “tenant” custom property. In fact, make sure that the JWT token contains just enough information so that resource servers can authorize the client without contacting the authorization server.

Designing a REST API isn’t easy. Anyone who claims differently either hasn’t designed one, hasn’t designed one for a moderately complex system, or is a rare genius who has never found anything to be challenging in their lives. One can set out determined to adhere to REST design principles where a client user can intuitively find the endpoints they need, where all “objects” in the system are represented as resources that can be acted upon with one of four HTTP operations (POST, GET, PUT, DELETE) and users can do whatever they’re trying to achieve with as few calls as possible. But challenges will appear. Immediately.

A couple years ago at Jama, we set out to build a new REST API to eventually replace all our SOAP and DWR interfaces. Two key design challenges were:

  • Chatty vs. Chunky API Design
  • Modeling Resources vs. Business Processes

The second challenge is difficult, and it’s one I could talk about all day (and perhaps I will in a later post). But, today I’m going to focus on the Chatty vs. Chunky problem.

Chatty vs. Chunky APIs


It wasn’t long after people started writing against our initial iteration of the API that the conversation about “chattiness” came up. API developers have been having conversations about this concept all over the web, but the essence of it is this: users need to get the data they are after in as few calls as possible. If they have to make too many smaller calls to get all the data they seek, then the API is too “chatty” for their needs. On the other hand, if API calls are too large and return more data than is needed, the API can be considered too “chunky”. The two ends of this spectrum are also referred to as Fine Grained vs. Coarse Grained APIs.

Let’s use an example to illustrate this. Say I have an API call to retrieve user comments on a blog. Comments can be made on a blog post, and if they are, they should have a page property to indicate the page they appear on. They should also have an author field to indicate who made the comment.

The user call would look something like this:

GET /comments?page=27

And the JSON data in the response might look something like this:

[
  {
    "id": "796",
    "author": "23",
    "page": "27",
    "createdDate": "2016-04-08T09:15:00",
    "text": "This blog page changed me for the better. I've never read anything quite like it."
  },
  {
    "id": "1097",
    "author": "1",
    "page": "27",
    "createdDate": "2016-04-08T16:15:00",
    "text": "I agree! Thanks for posting!",
    "inReplyTo": "796"
  }
]

This example is overly simplified, but you can see that in the comments, the author properties have the values 23 and 1, which you can assume are the authors’ unique user IDs. Similarly, the blog page these comments are on is referred to by its page ID of 27. The second comment is also in reply to another person’s comment, so its inReplyTo property has a value of 796, referencing that comment’s ID.

This payload is very short and simple, but it presents some problems if the API user is interested in knowing more about the author than just the author’s user ID (you can be sure that they at least want to know the author’s name!)

First, if you’re the user of this API, you’re not given any indication of how to retrieve the complete user information associated with user ID 1, nor the complete information for what content is on page ID 27. This is a problem with “discoverability.”

But even if that discoverability problem is solved, you would still need to make a separate API call to retrieve that user. Further, if you are retrieving a collection of hundreds of comments, you would potentially need to make a user call for each comment you retrieve. You would need to make hundreds of calls to the API to get the information you’re looking for. Hence the term “chatty”.
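In code, that chattiness tends to look like the following sketch, where api, Comment, User, and render() are hypothetical stand-ins for whatever client you have written:

// One call for the comments, then one extra round trip per comment for its author.
List<Comment> comments = api.getComments(27);
for (Comment comment : comments) {
    User author = api.getUser(comment.getAuthor()); // N additional requests for N comments
    render(comment, author);
}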

But let’s look at the other end of the spectrum. The API could attach the complete information of all object properties to the response and you’d get something like this:

[
  {
    "author": {
      "id": "23",
      "active": "true",
      "firstName": "Lisa",
      "lastName": "Turtle",
      "avatarUrl": "http://base_url.com/lisa.jpg",
      "registrationDate": "2012-04-19T09:16:00",
      "hobbies": "It's turtles all the way down"
    },
    "page": {
      "id": "27",
      "title": "The Meaning of Life",
      "createdDate": "2016-04-07T14:07:00",
      "author": {
       ...another user object...
      }
      ...and so on...
    },
    "createdDate": "2016-04-08T09:15:00",
    "text": "This blog page changed me for the better. I've never read anything quite like it."
  },
  {
    "author": {
      "id": "1",
      "active": "true",
      "firstName": "Jason",
      "lastName": "Goetz",
      "avatarUrl": "https://www.jamasoftware.com/app/uploads/2016/04/FEAT-Jason.jpg",
      "registrationDate": "2004-02-19T07:16:00",
      "hobbies": "Public debate, dancing, skeet shooting"
    },
    "page": {
      "id": "27",
      "title": "The Meaning of Life",
      "createdDate": "2016-04-07T14:07:00",
      "author": {
        ...another user object...
      }
      ...and so on...
    },
    "createdDate": "2016-04-08T16:15:00",
    "text": "I agree! Thanks for posting!",
    "inReplyTo": {
      ...the first comment data repeated again?...
    }
  }
]

As the API user, you now have all the information you need. The full author information is available, you know the full details of the blog page that the comment was posted on, and you can even see the entire other comment that this comment was in reply to directly in the inReplyTo value.

But you have a myriad of new problems.

The first is the sheer size of the payload returned. This is a fairly simplified example. User and page objects would likely have many more properties than these examples show. If you are retrieving hundreds of comments and you’re getting a full object for every property on each comment, this is going to be a lot of data. As the client user, you may not have bandwidth concerns about getting this much data across the wire, but it certainly could take the server more time to assemble all that data, and long-running transactions are much harder on a server’s CPU & memory. Especially when the server is dealing with multiple concurrent requests.

It should also be noted that if 99 out of 100 retrieved comments were authored by the same user, and all the comments are posted on the same page, then most of the user and page objects in your results are going to be redundant. The time the server spent assembling user and page data was mostly wasted.

Data inconsistencies are also bound to come up. In this case, what if there aren’t any restrictions on how many embedded replies you can have in your comments section? If someone replies to the first comment, then someone replies to that reply, then someone replies to that reply… you get the point. How should that data be represented? You could choose to go one level deep but API users may be confused about the point at which the API decides to cut off the addition of further data, and how they should write a client to consume it.

These are the problems with a chunky API.

The API could also try to find some kind of compromise and attach only partial data. While the full author’s info may be retrievable from some user call, the comment payload may only contain the author’s user ID, first name, and last name.

But, there are problems with this approach as well (this all sounds so negative!). Inconsistent partial objects make it harder to intuitively work with the API. The same inconsistencies described with the inReplyTo example above apply here as well. Also, the API may not provide the data you are looking for in the first place. If you just want the author’s name and avatar, but the avatar isn’t provided, you’ll still need to make a separate call to get the full user object just so you have that data.

So, what’s the best approach then?

The Solution

When making design decisions and facing a spectrum like this where both ends of the spectrum provide their own challenges, we can only strive to find balance and be practical. We need to find a solution to the Chatty API design problems while avoiding going down the Chunky API route. We want to remain simple, clean, and RESTful. We also want to make our API flexible enough to allow users to meet their own needs in the chatty to chunky continuum.

While the “partial data” example above attempts to find a compromise between Chatty and Chunky, I’ve already pointed out some issues with a compromised approach. Instead of compromising, what if we instead adhere to everything we like about the chatty API model, but give users the extra facilities to add data to their response?

Let’s look at another approach: the calling user makes a request for comments, but specifies that they would like the author data included as well:

GET /comments?page=27&include=data.author

The response would look like this:

{
  "links": {
    "data.author": {
      "type": "user",
      "href": "http://base_url.com/comments/{data.author}"
    },
    "data.page": {
      "type": "page",
      "href": "http://base_url.com/comments/{data.page}"
    },
    "data.inReplyTo": {
      "type": "comment",
      "href": "http://base_url.com/comments/{data.inReplyTo}"
    }
  },
  "linked": {
    "user": {
      "1": {
        "id": "1",
        "active": "true",
        "firstName": "Jason",
        "lastName": "Goetz",
        "avatarUrl": "https://www.jamasoftware.com/app/uploads/2016/04/FEAT-Jason.jpg",
        "registrationDate": "2004-02-19T07:16:00",
        "hobbies": "Public debate, dancing, skeet shooting"
      },
      "23": {
        "id": "23",
        "active": "true",
        "firstName": "Lisa",
        "lastName": "Turtle",
        "avatarUrl": "http://base_url.com/lisa.jpg",
        "registrationDate": "2012-04-19T09:16:00",
        "hobbies": "It's turtles all the way down"
      }
    }
  },
  "data": [
    {
      "id": "796",
      "author": "23",
      "page": "27",
      "createdDate": "2016-04-08T09:15:00",
      "text": "This blog page changed me for the better. I've never read anything quite like it."
    },
    {
      "id": "1097",
      "author": "1",
      "page": "27",
      "createdDate": "2016-04-08T16:15:00",
      "text": "I agree! Thanks for posting!",
      "inReplyTo": "796"
    }
  ]
}

This is a lot to ingest at once, but the payoff is worth it. You can see here that the comments payload is now listed under data. Also, there are now separate properties in the response called links and linked.

The data section is exactly the same as the one given under the Chatty API example above. But, with the helpful links and linked properties, the lack of associated data is much less of an issue.

The links section takes care of the “discoverability” problem I mentioned above. For any property that simply displays an ID (like author, page, and inReplyTo), the links section will describe how you can plug that ID into a separate API call to retrieve the information you’re seeking.

The linked section is where we really begin to solve the problems associated with Chatty APIs. In the request, you have asked to include any user objects that are referenced in any of the comment author fields. The resulting response now gives a data store of user objects in the linked section for any user IDs specified under author. This removes the need to make any further calls to the users endpoint to get all the information you’re seeking. But it also solves the redundancy problems since a user will only appear once per user ID. In other words, you could have 99 comments where the author value is the same, but you’d only have one inclusion of that author’s data in the linked section. This also (potentially) takes less time for the server to assemble than a full attachment of all author data to each individual comment since we’re only loading and assembling the author data once for the data store.
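As a small illustration of what this buys the client, here is a sketch using Jackson, with a hypothetical responseBody string holding the JSON above, that resolves each comment’s author from the linked section in memory:

// Parse the response once, then resolve authors from the "linked" store
// instead of making one /users call per comment.
JsonNode root = new ObjectMapper().readTree(responseBody);
for (JsonNode comment : root.get("data")) {
    String authorId = comment.get("author").asText();
    JsonNode author = root.at("/linked/user/" + authorId); // in-memory lookup, no extra request
    System.out.printf("%s %s: %s%n",
            author.get("firstName").asText(),
            author.get("lastName").asText(),
            comment.get("text").asText());
}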

This solution offers the simplicity of the chatty API at its base. It’s only giving you the basic information you’re requesting. But, it additionally gives you discoverability and the flexibility to ask for further data in the same request so we solve the main problems associated with chatty APIs. We’ve managed to address all of these previously mentioned problems:

  • Discoverability
  • Not enough data, i.e. the need for repeated API calls (chattiness)
  • Large payloads associated with chunky APIs
  • Server processing time associated with chunky APIs
  • Redundancy

Here at Jama, while designing our REST API, we’ve set out to find balance with an emphasis on ease of use, practicality, and flexibility for our users. This chatty vs. chunky tradeoff is just one aspect of the challenges we face in building an API that works for us and our users. We’ve come up with a REST JSON response data structure very similar to the one in the example above. It allows us to keep resources clean and lean without compromise, while still allowing our users to retrieve the data they seek. It’s loosely based on an initial version of the JSON API specification, and we feel it elegantly satisfies our needs and our users’.

Comments? Questions? I’d love to hear your feedback!