When using Mule, it is very common to set and access variables, session variables and properties along our flow. What are the differences between them?
a) variable - data that exists and lasts from the start to the end of a flow unless overwritten. Accessed using #[flowVars]
b) session - a longer-lasting location for storing values that survives across flows in the same session. Accessed using #[sessionVars]
c) property - message header information
d) message payload - the Mule message content that is sent to the user and moves from flow to flow. The message also carries inbound properties (accessed using #[message.inboundProperties]) and outbound properties (accessed using #[message.outboundProperties])
e) message events
The best way to test this out is to create a flow that makes use of these basic Mule constructs.
Let's start off with variables. In the flow below we grab input from HTTP and save it to a flow variable. Then we create a choice router to see if the variable is "reece". If yes, we take the upper branch and set the result to 111; otherwise 222.
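The flow itself was built in the visual designer; as a rough sketch of the equivalent configuration (element names follow Mule 3 XML conventions, and the listener config, path and query-parameter name are my assumptions, not the original project's values), it would look something like:

```xml
<flow name="variableDemoFlow">
    <http:listener config-ref="HTTP_Listener_Configuration" path="/" doc:name="HTTP"/>
    <!-- Save the incoming query parameter into a flow variable -->
    <set-variable variableName="name"
                  value="#[message.inboundProperties.'http.query.params'.name]"/>
    <choice>
        <!-- Upper branch: the variable matches "reece" -->
        <when expression="#[flowVars.name == 'reece']">
            <set-payload value="111"/>
        </when>
        <!-- Lower branch: anything else -->
        <otherwise>
            <set-payload value="222"/>
        </otherwise>
    </choice>
</flow>
```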
CORS is used to control access to a remote resource, for example api.foo.com. If we host a web page on www.foo.com, we can configure the remote resource, api.foo.com, and tell it that it should entertain requests coming from www.foo.com.
If you make a request from "evil.com" to api.foo.com, you will not be able to do so, because we never configured that to be allowed.
So we have,
www.foo.com --> making GET request to --> api.foo.com.
If www.foo.com is allowed to make requests to the site, we will get a response that looks like this.
Request from api.foo.com
=> OPTIONS https://api.foo.com/products
- HEADERS -
Response from api.foo.com
<= HTTP/1.1 204 No Content
- RESPONSE HEADERS -
Access-Control-Allow-Methods: GET, POST, OPTIONS
From here, we can see that "Access-Control-Allow…
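To make the allow-list idea concrete, here is a small Python sketch of the decision a server like api.foo.com makes when it sees a preflight request. The function name and the allow-list contents are illustrative assumptions, not any real api.foo.com configuration:

```python
# Hypothetical allow-list configured on api.foo.com
ALLOWED_ORIGINS = {"https://www.foo.com"}

def preflight_headers(origin):
    """Return the CORS headers for an OPTIONS preflight,
    or an empty dict when the origin is not allowed."""
    if origin in ALLOWED_ORIGINS:
        return {
            "Access-Control-Allow-Origin": origin,
            "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
        }
    return {}

print(preflight_headers("https://www.foo.com"))
print(preflight_headers("https://evil.com"))  # {} -> the browser blocks the call
```

When the dict comes back empty, the browser sees no Access-Control-Allow-Origin header and refuses to hand the response to the page, which is exactly why the "evil.com" request above fails.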
Pretty confusing to me at first, as I would have thought Mule Runtime hosts the Mule API Gateway. That's definitely not the case: API Gateway connects with API Manager to enforce and apply policies/settings like throttling, security and CORS to your back-end services. These enforcements are applied per specific app.
Mule Runtime is where all your applications get hosted and run. It takes incoming requests, runs the specific flows and returns results.
In this example, we're going to create the simplest Mule app flow. Our flow basically consists of an HTTP and a Groovy component, which looks like this :-
Using Groovy is optional and entirely up to the developer. If you're using it for simple, not-so-complicated tasks, then it is fine. Otherwise, Java gives you the ability to debug through the code.
Groovy does give you direct access to message, which you would otherwise need to obtain by calling getMessage() [if you are using a Java component].
And here is what our script looks like :-
The user submits a request like this: http://localhost:3004/?username='jeremy'.
If it is jeremy, great; if not, it returns "invalid user".
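The script was shown as an image in the original post; here is a minimal Groovy sketch of the logic described, assuming the query parameters arrive via the inbound property 'http.query.params' (the exact property access is my guess, not the original script):

```groovy
// Sketch: read the username query parameter from the inbound Mule message
def params = message.getInboundProperty('http.query.params')
def username = params['username']
// Return the name when it matches, otherwise flag an invalid user
return (username == 'jeremy') ? username : 'invalid user'
```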
For beginners, it's pretty hard to know what message properties and types are available.
Perhaps the picture below gives newbies a better way to work with Mule in the future.
In this example, we're going to create the simplest Mule app flow. Our flow basically consists of an HTTP and a Java component, which looks like this :-
The user basically connects to something like this :- http://localhost:3003/?username=jeremy.
A query parameter called username gets passed into the Java class, which returns "jeremy" or "unknown user" depending on the string passed.
Here is what our Java class looks like :-
From the code above, our Java class implements Mule's Callable interface, and we attempt to extract the username parameter from 'http.query.params', which is a Map. With this, we proceed to get our value by calling the Map's get method.
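The class itself was shown as a screenshot. As a standalone sketch of just the lookup logic (the class name UsernameCheck and the plain-Map method are illustrative; the real class implements Mule's Callable and pulls the Map out of the MuleEventContext):

```java
import java.util.HashMap;
import java.util.Map;

public class UsernameCheck {

    // Core logic of the component: look up "username" in the map
    // that Mule exposes as the inbound property 'http.query.params'
    public static String check(Map<String, String> queryParams) {
        String username = queryParams.get("username");
        return "jeremy".equals(username) ? "jeremy" : "unknown user";
    }

    public static void main(String[] args) {
        Map<String, String> params = new HashMap<>();
        params.put("username", "jeremy");
        System.out.println(check(params)); // prints "jeremy"
    }
}
```

Using "jeremy".equals(username) rather than username.equals("jeremy") also keeps the method safe when the parameter is missing (null).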
This happens a lot in Maven development. Somehow your repository has lost your .jar files and all you get is .lastUpdated files. To force Maven to redownload these dependencies, you need to run the following command :-
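The exact command was not captured in this snippet; a common approach (assuming the default local repository location ~/.m2/repository) is to delete the stale marker files and then force an update:

```
# Remove the stale .lastUpdated markers left behind by failed downloads
find ~/.m2/repository -name "*.lastUpdated" -delete

# -U forces Maven to re-check remote repositories for missing artifacts
mvn clean install -U
```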
One fine day I came across this article. Curious to see whether it works, I found a way to quickly set up your machine to run it using Docker. Yeap, you can set it up and run it in 30 minutes.
All you have to do is install Docker and run the following commands:
docker pull continuumio/anaconda
docker run -i -t continuumio/anaconda /bin/bash
This installs Anaconda with Python 2.7. Don't worry about this version, it's all good; plus this xgboost build works only on this version.
Additional libraries you might need to install are:
conda install -c bioconda xgboost=0.6a2
conda install pyqt=4
After that, just run python and you can run all the commands given in the article.
Let's say we are trying to use xgboost to make predictions about our data, and here is a sample of the data we're going to use :-
Some terminology before moving on. R uses the term label for the expected output we target when building a model. Yes, it is really confusing. The label becomes the final output of our predictions.
Basically what we're trying to find is whether smoking and high sugar intake will lead to a person having a disease. These are fake data of course. There are people who smoke and eat as much choc as they like, and they still look sharp. (not me tho)
First we will create these data using R. The code example shows we're loading some libraries and then creating a data frame called 'a'. Next it converts 'a' into a data table 'd'.
if (!require('vcd')) install.packages('vcd')
a = data.frame(id=c(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17, 18, 19…
Regardless, that offset is a way to say "hey, leave some gap/space for me". For example, say we have a grid of col-sm-10 and I specify col-sm-offset-2; that gives me a full 12-column grid in which 2 columns are reserved for spacing.
A good way to see this is to look at the example below :-
It was a bit confusing initially, but I guess the name offset makes sense once you see it.
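As an illustrative sketch (Bootstrap 3 markup, using the col-sm-10 / col-sm-offset-2 values from the example above; the text content is just filler):

```html
<div class="container">
  <div class="row">
    <!-- 2 columns of empty space, then content spanning the remaining 10 -->
    <div class="col-sm-10 col-sm-offset-2">content starts after the 2-column gap</div>
  </div>
</div>
```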
I found myself looking at a long list of commits which I needed to remove. So I fired up SourceTree, zoomed in on the commits I needed to undo, right-clicked each one and chose Reverse Commit. All the check-ins for that commit are gone.