At the recent IWS 2010, I saw Zachman give an interesting presentation about enterprise architecture. He quoted Brooks, saying that redundancy drives a system toward chaos, toward entropy. Well, actually, some "managed" (or better said, "controlled") redundancy is necessary to allow scalability. In the nice case, that redundancy is "mirrored": a copy, an instance. Each redundant part is an exact copy of every other redundant part. But not everything is so nice: there is a meaner kind of redundancy, the one created by specialization or customization. That is partial redundancy, where one part of the instance is the same across all instances, but another part is unique, different from the others. When that happens, we usually say we have a new version.
So, a version is a way to differentiate between two things that are similar but not quite the same. A version is something like an ID, and it signals partial redundancy. If there were no redundancy, or if all redundancy were absolute, we would not need versions.
Now, if two versions of something were to be treated the same, with no difference in how they are processed, there would be no need to differentiate them at all; nobody would care about the part of the instances that differs. Thus, a version can be defined as an ID plus the special processing or treatment that ID requires.
Let's look at it from another perspective. To create a version, we need to change something: data, a structure, a process. The original form is kept, and the new form is assigned a version ID to differentiate it from the original. In other words, versioning is basically a technique for handling the differences that appear when something changes. The real issue, then, is modifiability.
Modifiability? When we modify something and the rest of the system has to adapt to that change, we say we have no modifiability: the system is tightly coupled. Modifiability is a quality attribute that allows things (even instances) to change without the system breaking. It is the capacity of the system to support a change in one of its parts (here, each individual instance counts as one part). When something changes and we start creating special versions of things to handle the change while keeping the old parts working, we are adding partial redundancy. And that is bad, remember?
So, as a piece of advice: stay away from versioning as much as you can.
But what if you do need versions: how do you work with them in REST? Well, REST is a style that welcomes redundancy but tries to keep the flexibility to change without disrupting the system. That is, it offers modifiability.
Let's see what can change. In REST we have different servers that can come and go; the topology can change. Since we are using layers, a change to a server at the second level or beyond does not affect us. If the server that goes away is the one we are talking to, another should take its place (redundancy). If there is no other, we are in trouble. So here we have total redundancy, not partial.
Now, let's look at the next level. The resources actually operated on by a server may change. Say we no longer have a text file but a database, or the resource is no longer a static image but a video, or any other change a resource can suffer. Since we manage resources through representations, that change may not be a problem: it is solved by keeping the same representations as before, and even adding new ones that take advantage of the changes in the resource.
OK, you may say: what if the representation is what changes? Well, here we have two kinds of possible change: structural or content. If the representation's structure changes, we may be facing a totally new structure that needs a new media type. Maybe the structure changes only a little, as between WSDL 1.1 and 2.0; in that case the new 2.0 needs a new IANA registration (and in fact application/wsdl+xml is assigned to 2.0). The client will know what is being sent by reading the media type. Hmm, OK, that may not always happen; registering new types for new versions of existing media types may be difficult. In that case the new type description may declare backward compatibility, or the old spec may simply no longer be supported.
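As a minimal sketch of the idea (the URL and vendor media type below are hypothetical), a client can declare which representation structure it understands through the Accept header and confirm what it actually received from Content-Type, with no version numbers in the URI:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MediaTypeNegotiation {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Hypothetical resource and media type; the real names depend on your application.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/orders/42"))
                .header("Accept", "application/vnd.example.order+xml") // the structure this client understands
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // The media type, not a version segment in the URL, tells the client what structure it got.
        String contentType = response.headers().firstValue("Content-Type").orElse("unknown");
        System.out.println("Got " + contentType);
        System.out.println(response.body());
    }
}
```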
Now, let's analyze content modification in the representation. What does it mean? I take it to mean that the data being sent by the application is no longer the usual data. Hmm. Does that mean the client is statically bound to a set of values? Well, that is not a REST problem, right? The content should then have a way to tell the client what has changed. A version, you say? Well, yes, but an application-driven one. REST places no restriction on what values or fields you send out; one of them could be a version number, but that is application data, application semantics, entirely outside REST's concern.
A little note here. We may have a well-known generic data type like XML, and one normal document structure the client is reading. What if we add some new elements with new data, and remove others? Here we are not changing the data type (XML is still XML); what we changed is the XML schema. And of course, the schema is referenced in the XML body itself. So a client can perfectly well parse the XML if it understands the schema. Again, nothing to do with REST.
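A minimal sketch of that tolerance (the document and element names are made up): a client that pulls out only the elements it knows keeps working even when the document grows new elements:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class TolerantXmlClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical representation: a newer server added <loyaltyPoints>, an element this client never knew about.
        String xml = "<order><id>42</id><total>99.50</total><loyaltyPoints>120</loyaltyPoints></order>";

        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));

        // The client reads only the elements it understands and ignores the rest,
        // so the added element does not break it.
        String id = doc.getElementsByTagName("id").item(0).getTextContent();
        String total = doc.getElementsByTagName("total").item(0).getTextContent();
        System.out.println("Order " + id + " totals " + total);
    }
}
```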
Let's continue. Now I want to change the actual process. Say we need to do several POSTs to complete a process, and suddenly a new POST and a verification GET are required. In this case, unless the client is a static thing that does not follow the Hypermedia as the Engine of Application State constraint, the change is solved simply by adding a couple of links to one particular resource representation.
OK, so the resource that changed with two more links might be the one to be versioned, right? Well, no. If you still need the old resource representation, either create a new representation, create a modified special representation based on the client request (perhaps through a query parameter), or create a new resource that is used upon a client selection. Think of a virtual store with a normal four-step checkout process. If you have a credit card registered, you may want to skip the credit card form, or even perform a one-click checkout. Note that in these cases we are simply adding options that drive the client into a new process flow; old clients (that only understand the full checkout process, even if that is not nicely implemented) can still follow the old process link. So we keep all the old things and add new ones without breaking the app: the magic of hypermedia.
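As a minimal sketch (the representation, link relations, and URLs are all hypothetical), the server advertises the new flow as just another link, and a hypermedia-driven client follows the relation it understands:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class HypermediaCheckout {
    public static void main(String[] args) throws Exception {
        // Hypothetical cart representation: the server added a "one-click-checkout" link
        // next to the old "checkout" link, without breaking older clients.
        String cart = "<cart>"
                + "<link rel=\"checkout\" href=\"https://example.com/cart/77/checkout\"/>"
                + "<link rel=\"one-click-checkout\" href=\"https://example.com/cart/77/one-click\"/>"
                + "</cart>";

        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(cart)));

        NodeList links = doc.getElementsByTagName("link");
        for (int i = 0; i < links.getLength(); i++) {
            Element link = (Element) links.item(i);
            // An old client only knows "checkout"; a newer one can prefer "one-click-checkout".
            if ("checkout".equals(link.getAttribute("rel"))) {
                System.out.println("Following " + link.getAttribute("href"));
            }
        }
    }
}
```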
Did I forget something? I guess not. All these cases can be resolved without minting artificial URLs with versions in them, and without creating custom media types. The only cases where a version ID may be needed are application-specific ones, not related to REST.
Cheers!
Thursday, April 21, 2011
Forced Domain and Forced Paradigm anti-patterns
These two common (and usually overlooked) "anti-patterns" are frequently followed by developers. Actually, I'm not sure we can officially call them anti-patterns, but they are certainly common patterns that are not good to follow.
Let us explain what the symptoms are for each.
Forced Domain occurs when a domain that is not the natural one for the solution is forced into that solution. Here, forcing means using the foreign domain's concepts and processes to solve the problem in a usually unnatural way. The most common example is the IT domain forced into business logic, the user interface, the user experience, or any part of the solution that belongs in the business domain. For instance, when we build software that presents data in tables, shows codes (data keys), and exposes the notion of the next record, of indexes, or even of fields, we are forcing database concepts onto the client. To use the software, the user needs to learn those concepts first. The same happens with processes, when backups, initialization, remote calls and the like are forced onto the user.
Another case arises when we are integrating applications. Each solution we integrate may have its own domain, which may not be the same domain as the application we are integrating it with. Integrated applications should communicate in a loosely coupled manner. So, for one solution to use the other, both need to share some common business concepts, and those should be the only things in between. For example, a banking system that offers loans may need geographical information about the land being used as collateral. Forcing the banking system to understand geo-locations, azimuths, and GPS readings is nonsense. To obtain the needed information, the bank should request it using business information it already manages, like a particular address. It is the job of the geographical system to convert that request data, in the form of a text containing an address, into longitude and latitude pairs, and to convert the response's information into readable data for the bank.
OK, that may raise a question: isn't that forcing bank domain information into the geo-system? Not really. The geo-system is probably offering that query as a service, which means it is decoupled from the implementation and uses standard documents (which may even be generic) to pass information. Note that the geo-system does not receive bank-specific information, just a street address.
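A minimal sketch of that boundary (all interface and type names are invented for illustration): the bank talks in its own terms, and the coordinates stay inside the geo-system:

```java
// Hypothetical service boundary between the two domains.
// The bank asks in business terms (an address); coordinates stay inside the geo-system.
interface LandInformationService {
    LandReport describeLand(String streetAddress);
}

// What the bank actually consumes: readable business data, no azimuths or GPS readings.
record LandReport(String streetAddress, double areaInSquareMeters, String zoningCategory) { }

class LoanOfficer {
    private final LandInformationService landInfo;

    LoanOfficer(LandInformationService landInfo) {
        this.landInfo = landInfo;
    }

    boolean acceptAsCollateral(String streetAddress) {
        // The bank never handles longitude/latitude; that conversion is the geo-system's job.
        LandReport report = landInfo.describeLand(streetAddress);
        return report.areaInSquareMeters() > 500 && "residential".equals(report.zoningCategory());
    }
}
```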
Why is Forced Domain a bad thing?
A domain is not just a set of concepts. It is also a complex environment with processes that are natural to it and that work smoothly with its rules. The concepts fit, and all the data is structured in a way that facilitates the processing; if they did not, the domain would not work. Think of the medical domain, in surgery, where the concepts do not match or the rules force the surgeon to juggle tools not made for the job at hand. Imagine a surgeon who is given a fish knife to work with and is forced to kill the fish before opening it. The tasks involved in cleaning a fish may sound similar to some of those a surgeon performs, but the end goal is clearly different, and it is clearly not a good idea to practice surgery with a fish-cleaning knife.
Domain forcing also produces coupling of a very special kind. In some cases it requires application A to know about application B in order to use it, and in others it forces application B to know about A just so that A can use B! It is like teaching me mechanics so the mechanic can fix my car by asking me car-construction questions. In integration contexts, our banking application ends up forced to store and process geographical data because it needs those values to query the geographical system.
The Forced Paradigm anti-pattern is more related to development.
It occurs when something is not done the way it naturally should be, but "my way" instead. A very common example is a developer coding a service-consumption client in an OO language. The service is usually defined as a port to which we send messages following some pattern. To do that in Java, for instance, we map the port-and-message idea to a method defined in a stub, which acts like a local class. The notion of sending a message is lost: the developer sees the operation as just invoking a method locally. There we are forcing one paradigm into another; we are squeezing a message-send operation into a method-call operation. In Java we are supposed to have objects and invoke methods, not send messages. That is true, but then we should build an object that sends the message for us, and we just invoke that functionality. It sounds the same, but semantically it is not.
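A minimal sketch of the difference (class names and the endpoint are hypothetical): in the first style the remote interaction hides behind what looks like a plain local method; in the second, the code makes it explicit that a message is being sent to the service's port:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Forced paradigm: the stub makes a remote interaction look like a cheap local call.
class QuoteServiceStub {
    double getQuote(String symbol) {
        // ...generated code would perform a network round trip here, but nothing in the
        // signature tells the caller that. stub.getQuote("ACME") reads like local work.
        return 0.0; // placeholder
    }
}

// Keeping the paradigm honest: an object whose whole job is to send a message to the service port.
class QuoteMessenger {
    private final HttpClient client = HttpClient.newHttpClient();
    private final URI port = URI.create("https://example.com/quotes"); // hypothetical endpoint

    String send(String requestMessage) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(port)
                .POST(HttpRequest.BodyPublishers.ofString(requestMessage))
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body(); // the reply message, to be interpreted by the caller
    }
}
```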
Is that bad too?
Of course it is. Forcing a paradigm causes an impedance mismatch. It also adds overhead from paradigm conversions, and it removes semantics that would help the developer write well-performing code (stubs, for example, may trick developers into treating a call as local when it is remote, and that causes bad performance). Actually, if you look at the current Java specifications, you will find many that map the actual interface onto objects following this anti-pattern, hiding the real process from developers. That is done to make the API "easy" to use, but the resulting API is not good enough.
Do you have any examples of these anti-patterns? What is your take on how bad they are for your health?
Back to Posts! Blogging Season Ahead...
Ok, six months of not posting anything seems like too much.
There are lots of things keeping me busy, but I hope the next few months will let me finish the 4-5 posts that were started and never finished.
I have one almost ready, and I feel I cannot hold it any longer. So it is free to be published, although I may soon write about the topic again to clarify a few more things.
Off we blog!
Saturday, October 9, 2010
Distributed, Integrated or Networked?
Distributed is a very common adjective. I usually tell my students to be careful about using it for anything that sits on a network, which is how it is usually used. Even more, I tell them to beware of saying that REST or SOA is the answer to distribution needs, since that is not entirely true. The question I then get is: how can we tell whether something is distributed or not? OK, this post will try to clarify the difference between the three terms in the title, since they are very often confused.
Let's begin by thinking about systems. A system is a set of related components that interact, following some rules. The components may be anything: hardware or software, virtual or real, people or things. A solution, in turn, is the implementation of processes and flows that resolves (or helps to resolve) a problem. The solution may be a monolithic thing, or it may be a system.
If you take a system and analyze what it is doing, you may find that it is not good to have all the processing done in one place. So you decide to split it and place some parts elsewhere. Think of tasks all performed by one big process in one big environment. You may need to split the system and send some tasks to be performed in another environment, by another process. That way you obtain benefits like scaling, encapsulation, "replaceability", and better use of assets. Some say you also get reusability, but that is not always true, since it depends on how you split the system, and it is definitely not the goal of the split. You also lose some things, like performance, due to increased inter-process communication (IPC). Fair enough. What you have there is a distributed system: a whole that was split but still needs to work as a whole.
Now imagine you have two systems, each solving a problem. You may discover that, by joining forces, the systems can solve higher-level problems. So you decide they must communicate somehow, to work together. Fair enough. You create communication channels and you get an integrated macro-system. That is, each system is treated as a whole that can be composed, and reused, to create a bigger system. We need to keep each individual system's inner workings a secret and treat it as a whole, so it must be encapsulated, reusable, and loosely coupled. We use standard interfaces and higher-level processes. The frequency of interaction is low. And of course the user knows this bigger system is a large composed thing whose individual parts we can identify. We call this an integrated system.
But wait: isn't that the same idea as above? Well, not really. It may sound the same, but in implementation, and semantically, they are not. Distribution usually requires the system to work as a whole, and so communication is a pain the user should not be aware of: it must be fast, and it can afford to be tightly coupled. Since the parts of a distributed system are parts of one whole, they know each other, and that knowledge allows for faster communication if it is used in the coupling.
Let's check an example. For distributed systems, suggesting services over messaging as the communication channel may not work, since that actually increases the communication overhead. (Distribution over services seems to be a bad idea, an anti-pattern.)
On the other hand, you have the integrated system. Here each subsystem is a standalone system. There may be no need for fast communication, but there is a need for reliable, loosely coupled communication. There, a service fits, since we don't need to know the internals of one system to integrate it with another.
Fine. What about the networked system? Well, that is also a system, but built as a network. A networked system has its components interacting as a network: you have nodes with communication among all of them, you have routing to reach a node, and you have a very dynamic node topology. Yes, you can have integrated systems and distributed systems running on a network, but that does not make either of them a networked system. You could put the same parts of a distributed or integrated system on one big machine, with no network, and it would still be distributed or integrated.
In a networked system you can easily have more than one copy of a node. Routing and balancing give you the ability to increase or decrease the number of nodes and thus scale in place, incrementally, and in a focused way. You can add nodes without affecting the ones already in the network, and you can create subsystems with them. In fact, everything you can do with a computer network you can do with a networked system. I guess the difference between integrated/distributed systems and networked ones is clear now. No? OK.
Let’s do it again.
- A distributed system is a whole system that was split to improve its parts. It needs fast and reliable communication, as if the component being called were near at hand. Communication is usually direct and tightly coupled, interaction between the parts is frequent, and reuse of the parts is optional.
- An integrated system is made of whole systems, largely distributed and encapsulated. Interaction between parts is infrequent, and communication is loosely coupled (and costly). The parts are made reusable, and the user is usually aware of the composition.
- In a networked system, all the nodes form a network, and an application is a composition of node services. The topology may vary dynamically, the nodes may not be standalone components, and the system needs low coupling plus messaging services like routing and balancing. Nodes can be added and functionality adjusted without much impact. It scales locally and may contain subsystems.
OK. One big networked system is the web. Now, on the web you can have integration (B2B transactions), distribution (an enterprise with subsidiaries), and other networked systems (mashups, perhaps, or grids).
Clear now? Great! Based on that clarification, I intend to write a future post: REST is tuned for networked systems! Stay tuned...
Sunday, August 22, 2010
IWS 2010: The Why of REST, Expanded Abstract
Why do we talk about The Why of REST?
I recall the old days, when I first met REST. It was introduced to me as a revolutionary way to create web services without all the complexity of SOAP and WSDL, with an airline system used as the example. Very simple to understand. And yet very suspicious.
I had lots of questions over the following hours. The explanation was somewhat light, and it looked to me like a normal web request, nothing like a service. Later I read the REST dissertation (I actually made some annotations that I meant to turn into a blog post, and never did) and found out that REST was an architectural matter, far more than just a web request.
I kept searching and found several examples claiming to be REST incarnations that clearly were not. I also found several implementations where REST didn't even fit.
How so, you may ask? Isn't REST a solution for everything? Isn't REST an API, or an API creation technique? Isn't REST a service creation technology? Well, no.
REST was not created to be used in SOA as a service creation technique. The origin of REST is quite different, and it is tuned for the specific problems found when creating systems like the web.
Actually, REST is an architectural style, made out of a set of different sub-styles, each of which is there for a reason. Each style is a set of constraints with benefits and drawbacks, all carefully balanced to fulfill the needs of a particular type of application.
Many tutorials show REST as something for the developer implementing a service on the web, full of tricks and tips on how to replicate normal language and system operations over the HTTP protocol. Instead, REST should be seen at the architectural level, where trade-offs arise from constraints, and for that we need to understand why each of those constraints is in REST to begin with.
This presentation will work on REST from the architectural point of view (yes, no GET-POST-URI examples): why a networked-system application is so different, and why the REST constraints may help (after checking, of course, what those constraints are and how they impact the architecture). At the end, we will have a few minutes to discuss which SOA flavors REST fits, and why it is not good for everyone. Several "whys" to explain: the big Why of REST.
Sunday, July 25, 2010
What if you need Transactions and also REST?
Some time ago, in a discussion on REST-Discuss, an example of transactions in REST was given. The proposed solutions are usually complex and tedious. This is part of one of my posts there, fixed up a bit to include the sample case.
Case: we have a travel booking service. The traveler usually selects a flight and then a hotel to stay in. If the person cannot confirm the flight, the room booking should not be confirmed either. Here we have a transaction: it is all or nothing. Good.
How do you model that in REST?
The usual approach is to have two resources, the flight booking and the room booking, plus another resource you create: a transaction resource. That transaction resource contains the transaction data, either what to do or what was done. Finally, you commit or confirm the transaction resource, and everything is committed.
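As a rough sketch of that usual approach (the URLs and payloads are invented for illustration), the client creates the transaction resource, adds the steps, and then has to confirm it itself:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpRequest.BodyPublishers;
import java.net.http.HttpResponse.BodyHandlers;

public class ClientDrivenTransaction {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String tx = "https://example.com/transactions/tx-9"; // hypothetical transaction resource

        // The client itself builds the transaction resource step by step...
        client.send(HttpRequest.newBuilder().uri(URI.create(tx))
                .PUT(BodyPublishers.ofString("<transaction/>")).build(), BodyHandlers.ofString());
        client.send(HttpRequest.newBuilder().uri(URI.create(tx + "/steps"))
                .POST(BodyPublishers.ofString("<book flight=\"XY-123\"/>")).build(), BodyHandlers.ofString());
        client.send(HttpRequest.newBuilder().uri(URI.create(tx + "/steps"))
                .POST(BodyPublishers.ofString("<book hotel=\"Grand Plaza\"/>")).build(), BodyHandlers.ofString());

        // ...and then has to drive the commit itself: the client must be transaction-aware.
        client.send(HttpRequest.newBuilder().uri(URI.create(tx))
                .PUT(BodyPublishers.ofString("<transaction state=\"committed\"/>")).build(), BodyHandlers.ofString());
    }
}
```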
Well. For this I assume there are two different data sources, and we can commit to each one separately; when one transaction involves individual transactions at each source, you use the famous two-phase commit. Each source is a participant, right? The airline and the hotel are two different companies, so your transaction is not a simple one.
We can have one server dedicated to managing transactions. We create the transaction object in it, add the steps, and then ask it to perform them (it can even perform each step as it arrives and verify at the end that everything worked, or roll back at any step if needed). The problem is that, since we need to send it all the transaction steps and actions manually, the idea may suffer scalability problems.
On the other hand, you have the client, which needs to do all that processing to commit the transaction. That is, the client needs to be transaction-aware.
My feeling is that exposing the data entities as resources and leaving all the commit processing to the client exposes too much application detail. It may not break REST, but it adds unnecessary complexity.
In the example we model, for the two-phase commit, two sources: the airline resource and the hotel room resource. It is implied that both behave like databases, even separate database engines. And the client has to drive the transaction management to change data in both and then commit. That implicitly forces database concepts onto the resources, yet it still sounds like REST.
So far, so good. Now, my question would be: do I really need to do all that to reserve a package using REST? Well, thinking about how I would do it, I'd actually follow an online reservation workflow and see what happens (there is a small sketch of the interactions after this walkthrough):
a. I enter and search for a flight. The system returns a list of flights and I select one. At this point a draft reservation is created with my flight in it (think of a PUT of the empty reservation followed by a POST of the flight).
b. Then the system offers to add a hotel reservation, and from the provided list I select one too. That is added to my draft reservation (another POST).
c. Finally, I add my credit card information and post a confirmation (another POST).
This last action is served by, say, server number 5 of the 10 currently serving. Server 5 needs to complete the POST, and if it cannot, it returns an error to the client. That server uses the draft reservation resource's information to call a transaction manager and commit all the changes. If that fails, server 5 returns the error.
All of that is totally opaque to the client, which only confirms and receives a yes or no to the request. Depending on that response, the client retries, updates its selection of flights or hotel and confirms again, or even gives up and deletes the reservation. Simple, huh?
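Here is a minimal sketch of that client-side conversation (the URLs and payloads are assumptions for illustration; a real interaction would be driven by the links in the responses):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpRequest.BodyPublishers;
import java.net.http.HttpResponse;
import java.net.http.HttpResponse.BodyHandlers;

public class PackageReservation {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String draft = "https://example.com/reservations/abc123"; // hypothetical draft reservation URI

        // a. Create the empty draft reservation, then add the chosen flight.
        client.send(HttpRequest.newBuilder().uri(URI.create(draft))
                .PUT(BodyPublishers.ofString("<reservation/>")).build(), BodyHandlers.ofString());
        client.send(HttpRequest.newBuilder().uri(URI.create(draft + "/items"))
                .POST(BodyPublishers.ofString("<flight>XY-123</flight>")).build(), BodyHandlers.ofString());

        // b. Add the chosen hotel to the same draft.
        client.send(HttpRequest.newBuilder().uri(URI.create(draft + "/items"))
                .POST(BodyPublishers.ofString("<hotel>Grand Plaza</hotel>")).build(), BodyHandlers.ofString());

        // c. Confirm. Whether the server uses a transaction manager, two-phase commit,
        //    or something else entirely is invisible here: the client just gets yes or no.
        HttpResponse<String> confirmation = client.send(
                HttpRequest.newBuilder().uri(URI.create(draft + "/confirmation"))
                        .POST(BodyPublishers.ofString("<payment card=\"****1111\"/>")).build(),
                BodyHandlers.ofString());

        System.out.println(confirmation.statusCode() < 300 ? "Booked!" : "Try again or change the selection");
    }
}
```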
But wait: that draft sounds a lot like the transaction resource mentioned above. Well, yes, but semantically it is not. A user adding bookings to a draft does not expect each step to be final. The actual transaction happens at the end, when we confirm all the bookings.
The difference in this process is that the client is freed from knowing a transaction is happening. Resources are just resources, not databases or tables that need transactions, and the client doesn't have to choose between single- and two-phase commits. You can scale, since you can change the number of servers or transaction managers without touching the client. AND each client interaction leaves the system in a stable state. This can be RESTful too!
So, if we can hide the complexity of the transaction, why would we expose that complexity to the client? I might do it if it brought some benefit. My question would then be: what benefits does one implementation have over the other, or why is one of them unsuitable for a particular business case?
Note that the actual transaction coordination happens under the covers, at the server level. That information hiding allows not only the flexibility to adjust the process, but also reliability, since failed clients can recover.
Cheers.
Saturday, July 24, 2010
Trying to Explain App State for REST
I will try a different approach to explaining the concept of application state, which is usually confusing and a recurring question from REST newbies.
I want to play blackjack. I have some friends and we all go to the online blackjack service.
Each one of us enters the application, which lets us play blackjack. The main screen requests our IDs. A minute later, all but one of us are ready to play (the one still not ready forgot his ID number).
Let's see what happens there. We are, say, ten clients, each using the application: nine are ready to play while one is waiting for ID input. As you can see, there is one application, but each client now has a different application state. And each client knows its own state clearly.
So we left our friend looking for his number and started requesting cards. We knew there was a dealer at the other end of the line, but we had no way of knowing whether we were all playing with the same dealer!
Still, there was just one deck of cards. Yes, each card requested changed the deck. What's more, each request might be served by a different dealer!
Careful here, you may get confused. Requesting a card using GET is like asking the dealer to show you the face of the top card: you get a representation, but the card stays in the deck. In a real game, you want the card dealt to you. In REST, you POST something (a "draw a card" request message) to the dealer, at the actual deck URL. The dealer modifies the deck, say by removing the top card, and redirects you to that card. Here the deck of cards is a resource, and the dealers are the servers.
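A minimal sketch of that exchange (the deck URL, payload, and response handling are assumptions made up for this post):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BlackjackClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        URI deck = URI.create("https://example.com/blackjack/deck"); // hypothetical deck resource

        // GET only peeks: a representation of the deck's top card; the deck itself is unchanged.
        HttpResponse<String> peek = client.send(
                HttpRequest.newBuilder().uri(deck).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println("Peeked: " + peek.body());

        // POST a "draw a card" message: the dealer changes the deck and points us at our new card.
        HttpResponse<String> draw = client.send(
                HttpRequest.newBuilder().uri(deck)
                        .POST(HttpRequest.BodyPublishers.ofString("<draw player=\"me\"/>"))
                        .build(),
                HttpResponse.BodyHandlers.ofString());
        String cardUrl = draw.headers().firstValue("Location").orElse("(no card URL)");

        // The hand I build from such cards is my application state; the dealer does not keep it for me.
        System.out.println("My new card lives at " + cardUrl);
    }
}
```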
Now, can you clearly see the difference between the states? Each request may change the resource state, but that is not the application state. Each client has its own state of the game (each player knows its own hand), and that is the application state.
The actual server state would be something like "dealer working" or "dealer on a break". So server state is yet another, very different, thing.
So, what is the REST constraint here? Well, if you program the dealer to know each client's hand, you are violating REST, since the server would then be controlling the application state for each user.
The idea is that the client controls its application state, not the server. So even when the client wins the hand, it has to show it to the dealer, saying "I won". The dealer then hands over the lollipop prize, and after that the dealer simply waits for the next request.
Each request contains all the information needed for it to be completed in a single interaction. The sum of all the interactions is your final goal: the application.
Also note that the resource can change in the process, but that is not an application-state change.
Why is all that setup good? Well, you can have thousands of clients without impacting the servers, since the servers do not keep track of the hands. You can add or remove dealers with no impact on the clients or on the application states. You keep the resource implementation hidden. And you get a nicely distributed game, scalable and simple.
Hope this helps!