This video gives a brief introduction to PowerShell. It provides a (hopefully) valuable set of advice and commands that every developer can use in their day-to-day development activities on Windows with PowerShell.
I’m currently following the Pluralsight training Full Stack Node.js. The course was released more than a year ago and is based on Express 2.x.
I chose to build the course samples on Express 3.x, and because of this some features described in the course are no longer applicable.
The purpose of this blog post is to demonstrate how to properly migrate one of them: flash messages ("flashing").
Simply put, flashing consists of displaying a status message once an operation executed on a post has completed.
Flashing is particularly useful when using the Post/Redirect/Get pattern. The whole scenario can be described as follows:
- The user submits a form with a POST action
- The form is processed and a result status such as "Error" or "Success" is generated
- The POST action completes with a redirect that tells the client browser to GET another page
- The rendering of the GET result should display the processing status
I found several questions on Stack Exchange explaining how to do this, but none of them was clear enough for me to understand…
So I dug a bit and decided to share my findings here.
The original solution used app.dynamicHelpers, which is no longer available.
The migration document from Express 2.x to 3.x just says to replace it with:
middleware and res.locals…
Fine… How am I supposed to do that? I believe the answer is incomplete.
You use middleware to add processing to the handling of your request and its response; after all, that is what middleware is for. In the middleware you attach a function that can be used in the route handler. This function uses res.locals to attach something to your response that can be used when rendering the view. This is incomplete… or at least this is how I understood it, and it felt incomplete.
Our problem here is that we are using the Post/Redirect/Get pattern, which means that all data attached to res.locals vanishes as soon as you redirect. The redirect instructs the browser to perform a GET, which starts the processing of a brand-new HTTP request from the beginning. There is no way to solve our issue with just a middleware and res.locals! We need a way to pass information from the POST and redirect to the following GET. The only way to do this is via a cookie or the session, but either one then needs to be cleaned up once the status message has been displayed.
This is how I achieved it. First, I implemented the middleware; it looks like this:
The middleware will need to be added to your app express instance as usual.
Then, in the route handler for the POST, we have the following implementation.
This is basically where we set the status of the processing.
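A sketch of what the POST handler can look like, assuming the middleware added a req.flash(message) helper; savePost is a hypothetical persistence helper standing in for the course's model code:

```javascript
// POST handler sketch: process the form, store the status via req.flash,
// then redirect so the browser performs a GET (Post/Redirect/Get).
function handleEditPost(req, res) {
  savePost(req.params.id, req.body, function (err) {
    // The status goes into the session through the flash helper,
    // so it survives the redirect below.
    req.flash(err ? 'Error' : 'Success');
    res.redirect('/post/' + req.params.id);
  });
}

// Registered as usual:
// app.post('/post/:id', handleEditPost);
```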
And finally, we can use the status information attached to res.locals from the Jade template:
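A sketch of the Jade side; statusMessage is the name my middleware sketch puts on res.locals, so adapt it to whatever you attached:

```jade
//- statusMessage comes from res.locals; display it only when present.
if statusMessage
  p.status= statusMessage
```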
I recently had a discussion with a younger C# developer who was asking questions about the usage of the yield keyword.
He said he had never used it and thought it was useless. He then confessed that he didn't really understand what the keyword was about.
I tried to explain to him what it does, and this is the material I would have used if I had had it at the time.
I will try with this post to explain what “yield” is all about with simple but concrete examples.
It should be used in a function that returns an instance implementing the IEnumerable or IEnumerable&lt;T&gt; interface.
The function must explicitly return one of those interfaces, like the two following functions:
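The two functions might look like this (a sketch; the names are mine), one returning the non-generic IEnumerable and one the generic IEnumerable&lt;int&gt;:

```csharp
using System.Collections;
using System.Collections.Generic;

public static class Producers
{
    // Non-generic version: yield return turns this method into an iterator.
    public static IEnumerable GetIntegers()
    {
        yield return 1;
        yield return 2;
        yield return 3;
    }

    // Generic version: same idea, strongly typed.
    public static IEnumerable<int> GetIntegersGeneric()
    {
        yield return 1;
        yield return 2;
        yield return 3;
    }
}
```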
By returning the IEnumerable interfaces, those functions become iterable and can now be used directly in a foreach loop like:
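The loop itself is an ordinary foreach:

```csharp
// Inside Main(); GetIntegers is one of the yield-based functions just described.
foreach (var number in GetIntegers())
{
    Console.WriteLine(number);
}
```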
What is the difference between those two functions and this one?
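The third, list-based variant builds the whole list up front and returns it in one go (again a sketch):

```csharp
// List-based version: everything is computed before the caller sees an item.
public static IEnumerable GetIntegers()
{
    var list = new List<int> { 1, 2, 3 };
    return list;
}
```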
It might not be obvious at first sight, as the result is identical, but the execution flow is different.
Basically, if you debug the program, you will see the following execution for the returned list:
- Enter the foreach loop
- Call the GetIntegers ONCE
- Write the first number
- Write the second number
- Write the third number
And you will see the following when using yield return:
- Enter the foreach loop
- Call GetIntegers but leave at the first yield return
- Write the first number
- Call GetIntegers again, resume at the second yield return, and leave just after
- Write the second number
- Call GetIntegers again, resume at the third yield return, and leave just after
- Write the third number
That is all. It simply changes the execution flow and allows you to handle each element of the list one by one, before the next element is produced.
Is this magic? No, it is not. You could have achieved the same result by implementing the iterator pattern yourself, using the IEnumerable and IEnumerator interfaces and building a dedicated class, like the following code (for simplicity I will only implement IEnumerable, but IEnumerable&lt;&gt; could have been implemented as well):
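A hand-written equivalent could look like this (the class and member names are mine):

```csharp
using System.Collections;

// The iterator pattern by hand: Iterable hands out an Iterator that
// produces 1, 2, 3 one MoveNext() call at a time.
public class Iterable : IEnumerable
{
    public IEnumerator GetEnumerator()
    {
        return new Iterator();
    }

    private class Iterator : IEnumerator
    {
        private int _current;

        public object Current
        {
            get { return _current; }
        }

        public bool MoveNext()
        {
            // Advance to the next value; stop after 3.
            if (_current >= 3)
                return false;
            _current++;
            return true;
        }

        public void Reset()
        {
            _current = 0;
        }
    }
}
```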
And then define a function:
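Something along these lines:

```csharp
// Same signature as the yield-based version, but backed by our hand-written class.
public static IEnumerable GetIntegers()
{
    return new Iterable();
}
```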
The code generated by the compiler will look very similar in both cases.
This can be confirmed by looking at the IL code generated by both of our implementations.
We can see that when using yield, an extra class is generated for us that implements IEnumerable and IEnumerator (and their generic versions).
The Iterable class we wrote looks mostly the same (except for the generic versions, which we did not implement).
Basically, using yield gives us control over the way the items of our IEnumerable result are produced and processed. And there is no magic behind it:
it is simply a helper that generates the code for you.
In one of my assignments I had to investigate different ways to publish utility libraries to different projects and development teams. The first idea that came to my mind was to build a NuGet package and configure an internal NuGet feed where I could publish it. This sounded like a good idea, and I was about to close the analysis phase and settle down for the implementation when someone came to me and asked how I was going to manage security patch deployment. Let me clarify what a security patch is.
A security patch is a patch that needs to be deployed to production regardless of the risk of the production application getting into trouble. It is a patch that does not contain any API or interface change, only internal corrections. Such patches are not deployed in the scope of a particular application but on every machine where a specific component is used. In my situation, as I'm only delivering libraries, I need to be able to tell the ops team: "Please deploy this on all machines where the library is in use." And this is my major problem: I don't know where that is. My library gets used through NuGet, and only the client applications know which packages they are using. I also cannot guarantee that a fix published as a new NuGet package version will be picked up right away by the client application's development team and included in their next deployment.
What options do I have here? NuGet packages do not cover this scenario, by design. My first reaction was to challenge the requirement: what kind of library might require a security patch? Not that many. You know what they say: "Show me a dragon and then I will show you Excalibur." This did not convince anyone. I had to find a specific way to deploy those security-sensitive libraries.
This is when I started investigating GAC deployment. How do I achieve GAC deployment? Well, I build my library and make it available through an MSI that registers the library in the GAC. Since the MSI is deployed on a machine as a unit of deployment, it can be tracked and inventoried by the ops team, so I can find the list of machines where I will have to deploy my security patch.
GAC deployment gives me the ability to deploy a new version of a library on a machine and make sure any client application using the old version of the component picks up the new version right away. I tested this, and this is how I achieved it:
I wrote two libraries with the following code:
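A sketch of what the two versions could look like; the class and method names are mine, and only the returned message and the AssemblyVersion differ between them:

```csharp
// Version 1.0.0.0 of CommonLibrary ([assembly: AssemblyVersion("1.0.0.0")]):
namespace CommonLibrary
{
    public class Greeter
    {
        public string GetMessage()
        {
            return "Hello from version 1.0.0.0";
        }
    }
}

// Version 2.0.0.0 — the "security patch": same public API, internal change
// only. The differences are the AssemblyVersion attribute and the body:
//     return "You have been security patched.";
```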
Both of them were compiled and strong-named with the same key.
I wrote a small client app that used the library. It looked like this:
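A sketch of the client; the type it calls and the output file name are my own:

```csharp
using System;
using System.IO;
using CommonLibrary;

class Program
{
    static void Main()
    {
        // Compiled against version 1.0.0.0 of CommonLibrary; writes the
        // library's message to an output file so we can see which version ran.
        var greeter = new Greeter();
        File.AppendAllText("output.txt", greeter.GetMessage() + Environment.NewLine);
    }
}
```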
I built those components on .NET Framework 4.5, which means I'm using the .NET Framework 4.0 GAC.
I deployed version 1.0.0.0 of the library to the GAC using the following .NET Framework 4.0 gacutil command from a Visual Studio 2012 command prompt:
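The install command was presumably of this form:

```
gacutil /i CommonLibrary.dll
```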
I could then use the following command to check that my component was properly installed in the GAC:
And the result was:
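Presumably the listing command, with an illustrative output (the public key token is elided here; yours will match your signing key):

```
gacutil /l CommonLibrary

The Global Assembly Cache contains the following assemblies:
  CommonLibrary, Version=1.0.0.0, Culture=neutral, PublicKeyToken=..., processorArchitecture=MSIL

Number of items = 1
```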
Then I ran the client application's exe file, and I could see the following line in my output file:
Then I deployed version 2.0.0.0 of the library using the same method as before.
I then ran the same listing command to check the content of my GAC.
This clearly showed the multiple versions of my deployed library.
After running the client application, I could see that it was still using version 1.0.0.0.
In order for my client application to use version 2.0.0.0 of the library, I had to deploy a policy file.
A policy file is an XML config file that gets compiled into a DLL so that it can be deployed to the GAC.
It tells the GAC to redirect all calls for a given version to another version.
This is the content of my policy config file that I named RedirectPolicyFile.config.
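The redirect itself is a standard assembly binding redirect; the public key token below is elided and must match your signing key:

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- Identify the library the policy applies to. -->
        <assemblyIdentity name="CommonLibrary"
                          publicKeyToken="..."
                          culture="neutral" />
        <!-- Redirect every request for 1.0.0.0 to 2.0.0.0. -->
        <bindingRedirect oldVersion="1.0.0.0"
                         newVersion="2.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```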
I compiled it using the following command
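The policy DLL is produced with the assembly linker (al.exe); the key file name here is my own, and it must be the same key used to sign the library:

```
al /link:RedirectPolicyFile.config /out:policy.1.0.CommonLibrary.dll /keyfile:key.snk /version:1.0.0.0
```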
Then I registered the policy assembly "policy.1.0.CommonLibrary.dll" in the GAC using the same command as usual.
We can then run the client application and check the output file. It should contain the following line:
You have been security patched.
In one of the projects I'm currently working on, I need to be able to call a function from an assembly that is provided at run time.
One of the major requirements I have is a clear isolation of the call, with a minimum of configuration.
The second requirement is to be able to provide a regular configuration file with my callee assembly, so that a lambda developer can implement a WCF call in that assembly using regular config files.
In other words, they should be able to write a simple .NET assembly referencing other assemblies and making use of a config file, and all of that should just work.
There is no particular performance requirement; it is left to the developer of the callee assembly to manage this. It will be up to them to dispatch and manage threads if needed.
The easiest solution I found was to create a new app domain, load the callee assembly in that new domain, and execute the call there. This gave me the isolation level I needed.
My calling class looks like this:
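A sketch of what the calling side can look like; the names and the exact setup are my own:

```csharp
using System;

public class Caller
{
    public void Run(string assemblyPath)
    {
        // Point the new domain at the callee's own config file so that a
        // plain MyLibrary.dll.config works as a regular app config there.
        var setup = new AppDomainSetup
        {
            ConfigurationFile = assemblyPath + ".config"
        };

        AppDomain domain = AppDomain.CreateDomain("CalleeDomain", null, setup);
        try
        {
            // Create the proxy inside the new domain; the call executes there.
            var proxy = (ProxyComponent.Proxy)domain.CreateInstanceAndUnwrap(
                "ProxyComponent", "ProxyComponent.Proxy");
            proxy.Execute(assemblyPath);
        }
        finally
        {
            // Unloading the domain unloads the callee assembly with it.
            AppDomain.Unload(domain);
        }
    }
}
```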
My ProxyComponent.Proxy class looks like this:
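And a sketch of the proxy. It must derive from MarshalByRefObject so the call crosses the AppDomain boundary; the way it locates the IScheduler implementation, and the Run member, are assumptions of mine:

```csharp
using System;
using System.Linq;
using System.Reflection;

namespace ProxyComponent
{
    // IScheduler lives in the single assembly shared by caller and callee.
    public class Proxy : MarshalByRefObject
    {
        public void Execute(string assemblyPath)
        {
            // Load the callee assembly inside this (new) AppDomain.
            Assembly assembly = Assembly.LoadFrom(assemblyPath);

            // Find a concrete type implementing the shared interface.
            Type schedulerType = assembly.GetTypes()
                .First(t => typeof(IScheduler).IsAssignableFrom(t) && !t.IsAbstract);

            var scheduler = (IScheduler)Activator.CreateInstance(schedulerType);
            scheduler.Run(); // hypothetical member of IScheduler
        }
    }
}
```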
The only shared assembly between the callee and the caller is the one that contains the IScheduler interface.
This could have been avoided using the DLR and dynamic. I'll try to work on this in the near future.