Webinar recap: External systems integration

21 January 2021

On January 13th, our identity specialists Dave Downs and David Manning presented ‘API integration and User migration for Azure AD B2C’. This webinar was the fourth session in our Microsoft Azure AD B2C series. Here’s our recap of the Q&A session that took place.

This webinar, along with the previous sessions in the series, is now available to view on YouTube. To find a specific session, head over to our events page and access any webinar you'd like to view.

For any suggestions on topics you'd like to see covered, or if you have any questions for our team, contact us at info@condatis.com.


Unsecured APIs

Can B2C call an unsecured API?

There are some caveats with this. B2C would prefer you always use an authenticated API. If you do want to call an unsecured API in production – allowing anonymous calls to the API – you'll need to add an "AllowInsecureAuthInProduction" setting to the technical profile's metadata and set it to 'true'. This tells B2C you understand that this API call won't be authenticated in a production system, and B2C will then execute it. To summarise: yes, B2C can call an unsecured API, but you must explicitly tell B2C that this is your intention in your production environment.
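In a custom policy, that setting sits in the Metadata section of the RESTful technical profile that calls your API. A minimal sketch – the profile Id, ServiceUrl and claims here are placeholders:

```xml
<TechnicalProfile Id="REST-CallUnsecuredApi">
  <DisplayName>Call an unauthenticated API</DisplayName>
  <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.RestfulProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
  <Metadata>
    <Item Key="ServiceUrl">https://example.com/api/lookup</Item>
    <Item Key="SendClaimsIn">Body</Item>
    <!-- No authentication on this call... -->
    <Item Key="AuthenticationType">None</Item>
    <!-- ...and confirm to B2C that this is deliberate in production -->
    <Item Key="AllowInsecureAuthInProduction">true</Item>
  </Metadata>
  <InputClaims>
    <InputClaim ClaimTypeReferenceId="email" />
  </InputClaims>
</TechnicalProfile>
```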

B2C and existing APIs

Can B2C integrate with APIs that already exist, rather than ones written explicitly for it?

Yes, it can. If you have an API that simply returns some information as JSON, it's likely B2C will be able to integrate with it without any major issues. Some work may be needed around authentication, as B2C only supports certain authentication modes; if your API supports one of them, B2C shouldn't have much trouble interacting with it.

Where it can begin to get complicated – with less chance you'll be able to integrate directly with the API – is when doing things such as validation. B2C expects the response in a certain format, and it expects the status code that comes back from the request to be in a specific range. For example, the response status must be in the 4xx range for validation failures. If your API doesn't do that, and only ever returns 200, the best solution is to write a wrapper API: B2C calls the wrapper API, and the wrapper API calls your original API.
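As a sketch of that translation layer, here is the kind of mapping a wrapper might perform. The legacy response shape (a "valid" flag and an "error" message) is an assumption for illustration; the failure response follows B2C's documented conflict format, a JSON body with version, status and userMessage fields.

```python
def to_b2c_response(legacy_status: int, legacy_body: dict) -> tuple[int, dict]:
    """Map a legacy API result onto the response shape B2C expects.

    The legacy API is assumed (for illustration only) to always return
    200, with a boolean 'valid' flag and an optional 'error' message.
    """
    if legacy_status == 200 and legacy_body.get("valid"):
        # Success: B2C treats a 200 response as a passed validation.
        return 200, {"version": "1.0.0", "status": 200}
    # Failure: B2C expects a 4xx status (409 Conflict in the documented
    # format) plus a userMessage it can display to the end user.
    message = legacy_body.get("error", "We could not verify your details.")
    return 409, {"version": "1.0.0", "status": 409, "userMessage": message}
```

In a real wrapper this function would sit behind an HTTP endpoint that forwards the incoming claims to the legacy API and returns the translated status and body.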

Essentially, you're writing an API that B2C can talk to in the format it wants, and which communicates with the actual end product, whether directly or through another API that B2C can't interact with natively. An example of when we do this is for an application such as Microsoft Dynamics 365. Dynamics has an API you can interact with, but it's not set up in a way that's easy for B2C to communicate with. The solution? Set up an API on top of it that handles caching and asynchronous calls, but primarily translates between how B2C likes to talk to APIs and how the Dynamics API talks.

Does it matter where the API resides? Could we call APIs that are running on AWS?

It doesn't matter where the API is hosted. As long as it's accessible to B2C – meaning it isn't in a private network and is reachable from the internet – B2C should have no issue communicating with it.

Is there a limit to the number of APIs that can be called in a user journey?

No more so than any of the other limits on user journeys. There can be several steps within a journey, in whatever order you want; you're just adding individual technical profiles and configuring each one to execute at a particular step.
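Each call is simply another orchestration step referencing its own technical profile, so a journey with several API calls looks something like this – the step numbers, exchange Ids and profile Ids are invented for illustration:

```xml
<OrchestrationStep Order="3" Type="ClaimsExchange">
  <ClaimsExchanges>
    <!-- First REST call: fetch the user's profile from a CRM -->
    <ClaimsExchange Id="GetProfileFromCrm" TechnicalProfileReferenceId="REST-GetCrmProfile" />
  </ClaimsExchanges>
</OrchestrationStep>
<OrchestrationStep Order="4" Type="ClaimsExchange">
  <ClaimsExchanges>
    <!-- Second REST call: record the sign-in in an audit system -->
    <ClaimsExchange Id="LogSignIn" TechnicalProfileReferenceId="REST-AuditSignIn" />
  </ClaimsExchanges>
</OrchestrationStep>
```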

Migrating from multiple legacy systems

If you were migrating from multiple legacy systems, each with their own authentication system into a B2C tenant, how would you go about doing this?

It depends on what the authentication systems are, and how easy it is to identify which of them a user should be using. You'd combine all the steps we've talked about throughout the series into the solution. You may have an API that identifies which system the user belongs to, and then have B2C use that information to direct them down a particular federated route. Say you have four different legacy systems, each of which supports OpenID Connect, SAML or both. You'd set these connections up in your custom policies, and then add some logic to route the user the correct way.
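In a custom policy, that routing logic is typically expressed as preconditions on orchestration steps: a federation step is skipped unless the claim identifying the user's legacy system matches. A sketch, assuming a legacySystem claim has been returned by your lookup API (claim names and profile Ids are placeholders):

```xml
<!-- Only run the federation step for users from legacy system A -->
<OrchestrationStep Order="4" Type="ClaimsExchange">
  <Preconditions>
    <!-- Skip this step when legacySystem is NOT equal to systemA -->
    <Precondition Type="ClaimEquals" ExecuteActionsIf="false">
      <Value>legacySystem</Value>
      <Value>systemA</Value>
      <Action>SkipThisOrchestrationStep</Action>
    </Precondition>
  </Preconditions>
  <ClaimsExchanges>
    <ClaimsExchange Id="FederateSystemA" TechnicalProfileReferenceId="SAML-LegacySystemA" />
  </ClaimsExchanges>
</OrchestrationStep>
```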

If it's purely multiple API calls – you've got these four different systems, each with its own database, and you just want to make API calls behind the scenes to migrate the user – then it's about figuring out what you want your logic to be. That might mean making API calls in sequence as validation steps in the B2C policy, checking whether the user's details are valid against each system, similar to the just-in-time migration we showed in the webinar. Or it might make more sense to put all of that behind a single API containing the logic, and have B2C call that one API during validation instead.
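Expressed as validation technical profiles on the self-asserted sign-in step, the sequential checks might look like the sketch below. The REST profile Ids are placeholders, and a production policy would normally add preconditions so later checks are skipped once one system has validated the user:

```xml
<ValidationTechnicalProfiles>
  <!-- Try each legacy system in turn; continue past failures so the
       next system gets a chance to validate the credentials -->
  <ValidationTechnicalProfile ReferenceId="REST-CheckSystemA" ContinueOnError="true" />
  <ValidationTechnicalProfile ReferenceId="REST-CheckSystemB" ContinueOnError="true" />
  <!-- Finally, write the migrated account to the B2C directory -->
  <ValidationTechnicalProfile ReferenceId="AAD-UserWriteUsingLogonEmail" ContinueOnError="false" />
</ValidationTechnicalProfiles>
```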

Evidently, there are many ways of doing this, depending on what exactly these legacy systems are, whether there are endpoints that can be federated to, whether there is information about which user belongs to which system, and what you can do with API calls.

For solutions as complex as the one described in this question, get in touch with us and we can help you design and implement a system tailored specifically to your needs.


Download the webinar slides.