Tuesday, September 10, 2019

Types of Cryptographic Algorithms


  1. Symmetric
  2. Asymmetric
  3. Hash functions

Symmetric Cryptography


E.g. the One Time Pad, Caesar Cipher, and Enigma Machine: they use the same key to encrypt and decrypt.

Asymmetric Cryptography

E.g. RSA: a key pair is used, with a public key to encrypt and a private key to decrypt.



Encryption and decryption should follow these mechanisms:
  • Confusion - Key and Ciphertext: each bit of the ciphertext should depend on the key in an obscure, complex way
  • Diffusion - Message and Ciphertext: flipping one bit of the message should change roughly half of the ciphertext
DES (Data Encryption Standard) vs AES (Advanced Encryption Standard)

AES-256 provides confusion.

Cipher Block Chaining (CBC) provides diffusion.
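To make that concrete, here is a minimal sketch of AES-256 in CBC mode using .NET's built-in crypto APIs. The class name and the all-zero key/IV are mine and for demonstration only; real code must use a random key and a fresh random IV per message:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class CbcDemo
{
    // Encrypt with AES-256 in CBC mode. Each plaintext block is XORed with
    // the previous ciphertext block before encryption, which diffuses a
    // change in one block into every ciphertext block that follows it.
    public static byte[] Encrypt(byte[] plaintext, byte[] key, byte[] iv)
    {
        using (Aes aes = Aes.Create())
        {
            aes.Mode = CipherMode.CBC;
            aes.Key = key;   // 32 bytes => AES-256
            aes.IV = iv;     // 16 bytes
            using (ICryptoTransform enc = aes.CreateEncryptor())
                return enc.TransformFinalBlock(plaintext, 0, plaintext.Length);
        }
    }

    static void Main()
    {
        byte[] key = new byte[32]; // all-zero key/IV: demo only!
        byte[] iv = new byte[16];

        byte[] a = Encrypt(Encoding.UTF8.GetBytes("Attack at dawn. Attack at dawn."), key, iv);
        byte[] b = Encrypt(Encoding.UTF8.GetBytes("attack at dawn. Attack at dawn."), key, iv);

        // One flipped bit in the first block changes every ciphertext block.
        Console.WriteLine(BitConverter.ToString(a));
        Console.WriteLine(BitConverter.ToString(b));
    }
}
```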


Classes Of Cryptographic Hash Functions

There are several different classes of hash functions. Here are some of the most commonly used:
  • Secure Hash Algorithm (SHA-2 and SHA-3)
  • RACE Integrity Primitives Evaluation Message Digest (RIPEMD)
  • Message Digest Algorithm 5 (MD5)
  • BLAKE2

An Intro To One-Way Hash Functions

Hash functions are often called one-way functions because, according to the properties listed above, they must not be reversible. If an attacker could easily reverse a hash function, it would be totally useless. Therefore, cryptography requires one-way hash functions.
The best way to demonstrate a one-way function is with a simple modular function, also called modular arithmetic or clock arithmetic. Modular functions are mathematical functions that, put simply, produce the remainder of a division problem.
So, for example, 10 mod 3 = 1. This is true because 10 divided by 3 is 3 with a remainder of 1. We ignore the number of times 3 goes into 10 (which is 3 in this case) and the only output is the remainder: 1.
Let’s use the equation X mod 5 = Y as our function. Here’s a table to help get the point across:
[Table: outputs of X mod 5 = Y for successive inputs X]
You can probably spot the pattern. There are only five possible outputs for this function. They rotate in this order to infinity.
This is significant because both the hash function and the output can be made public but no one will ever be able to learn your input. As long as you keep the number you chose to use as X a secret, it’s impossible for an attacker to figure it out.
Let’s say that your input is 27. This gives an output of 2. Now, imagine that you announce to the world that you’re using the hash function X mod 5 = Y and that your personal output is 2. Would anyone be able to guess your input?
Obviously not. There are literally an infinite number of possible inputs that you could have used to get a result of 2. For instance, your number could be 7, 52, 3492, or 23390787. Or, it could be any of the other infinite number of possible inputs.
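A quick sketch of this many-to-one behaviour in code (the class and method names are just for illustration):

```csharp
using System;

class OneWayDemo
{
    // The toy "hash" from the text: Y = X mod 5.
    public static int ToyHash(int x)
    {
        return x % 5;
    }

    static void Main()
    {
        // All of these inputs collide on the same output, 2, so the
        // output alone cannot tell you which input was used.
        int[] candidates = { 7, 52, 3492, 23390787 };
        foreach (int x in candidates)
            Console.WriteLine("{0} mod 5 = {1}", x, ToyHash(x)); // always 2
    }
}
```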
The important point to understand here is that one-way hash functions are just that: one-way. They cannot be reversed.
When these same principles are applied to a much more sophisticated hash function, and much, much bigger numbers, it becomes impossible to determine the inputs. This is what makes a cryptographic hash function so secure and useful.

Regardless of the length of the input, the output will always be the same fixed length and it will always appear completely random. Play around with this tool to see for yourself.

This online tool allows you to generate the SHA-256 hash of any string. SHA-256 was designed by the NSA and is more reliable than SHA-1.
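Under the hood, such a tool does little more than this sketch built on .NET's SHA256 class (the helper name is mine):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class HashDemo
{
    // SHA-256 digest of a UTF-8 string, rendered as lowercase hex.
    public static string Sha256Hex(string input)
    {
        using (SHA256 sha = SHA256.Create())
        {
            byte[] digest = sha.ComputeHash(Encoding.UTF8.GetBytes(input));
            StringBuilder hex = new StringBuilder(digest.Length * 2);
            foreach (byte b in digest)
                hex.Append(b.ToString("x2"));
            return hex.ToString();
        }
    }

    static void Main()
    {
        // Whatever the input length, the output is always 64 hex chars (256 bits).
        Console.WriteLine(Sha256Hex("hello"));
        Console.WriteLine(Sha256Hex("a much, much longer input string").Length); // 64
    }
}
```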

Custom Iterators



By Bill Wagner

In this article, I’ll discuss custom iterators, predicates, and generic methods. You can combine these three techniques to create small building blocks that plug together in a variety of ways to solve everyday development problems.  Once you can recognize when to apply this pattern, you will be able to create a large number of reusable building blocks.
When we build software, we rarely work with one instance of a data type. Instead, most of our work centers on collections of data: lists, arrays, dictionaries, or other collections. As a result, much of our code appears as a series of loops:
foreach (Thing a in someContainer)
    doWorkWith(a);
That’s simple enough, and there’s little point in trying to get more reuse from that simple construct.  But most of our daily work isn’t really that simple. Sometimes the iteration creates a new collection, or a new sequence. Sometimes the iteration should only affect some of the elements in a collection, not all of them. Sometimes the iteration will remove some elements from the collection.  With all the different variations, we’re back to copying code and modifying it.  That lowers our productivity. When you create custom iterators and predicates instead of copying and modifying loops, you decouple the common actions from the specific code. And, often, you will use the output from one iteration as the input to another.  By using custom iterators to perform the iteration, you can save memory and increase performance as well.
Let’s start with a simple sample, and modify that code to create a set of reusable custom iterators and predicates. Suppose you’ve been asked to print out a list of people you know from the New York City phone book. It is a contrived problem, but it exhibits the characteristics of many real-world problems. You need to process a large number of objects (the list of all entries in the New York City phonebook). You need to extract a subset of the properties of each object (the first and last names). Finally, you need to filter the input set to generate the output set (only the people you know).  Your first attempt might look like this:

// First attempt:
public List<string> PeopleIKnowInNewYork()
{
    IEnumerable<PhoneBookEntry> newYorkNumbers =
        PhoneBook.FindListFor("New York");

    List<string> peopleIKnow = new List<string>();

    foreach (PhoneBookEntry ph in newYorkNumbers)
    {
        string name = string.Format("{0} {1}", ph.FirstName, ph.LastName);
        if (RecognizePerson(name))
            peopleIKnow.Add(name);
    }

    return peopleIKnow;
}

This code does produce the proper output, but there’s a lot to criticize. You’ve loaded every name in the New York phone book into one list.  That’s very wasteful. Chances are you know a very small percentage of the people in New York. It is a big place, after all.  However, by creating a local List of all the people in New York, you prevent the garbage collector from freeing any of those entries until the entire list is processed.  At best, that’s very wasteful. Unless you have a very large memory configuration, your application probably fails. Also, from a design standpoint, it falls short of expectations.  Chances are that as this imaginary sample application grows, you will get other requests for similar, but not identical, features. For example, you may be asked to find everyone you called in the last six months, and print out the phone number you dialed. Right now, that produces another method that has almost the exact same contents, with one or two changes.
As a first step to creating more usable code, you can create a custom iterator that returns a sequence of names from a sequence of PhoneBookEntries.  That would look like this:

IEnumerable<string> ConvertToNames(IEnumerable<PhoneBookEntry> list)
{
    foreach (PhoneBookEntry entry in list)
        yield return string.Format("{0} {1}", entry.FirstName, entry.LastName);
}

This additional method changes your PeopleIKnow method to this:

// Second attempt:
public List<string> PeopleIKnowInNewYork()
{
    IEnumerable<PhoneBookEntry> newYorkNumbers =
        PhoneBook.FindListFor("New York");

    List<string> peopleIKnow = new List<string>();

    foreach (string name in ConvertToNames(newYorkNumbers))
    {
        if (RecognizePerson(name))
            peopleIKnow.Add(name);
    }

    return peopleIKnow;
}
It’s a little better. Now, anytime you get a new request that requires you to convert a set of phone entries to a set of strings, you’ve already got the method to do it. One more quick modification to the method saves you a lot of memory.  If you examine the application, you’ll almost certainly find that you never need the full list of names. You really only need to enumerate the list of names.  So, you can try this modification and see if everything still compiles:
public IEnumerable<string> PeopleIKnowInNewYork()
That works, so you can change the PeopleIKnowInNewYork method to a custom enumerator method:
// Third attempt:
public IEnumerable<string> PeopleIKnowInNewYork()
{
    IEnumerable<PhoneBookEntry> newYorkNumbers =
        PhoneBook.FindListFor("New York");

    foreach (string name in ConvertToNames(newYorkNumbers))
    {
        if (RecognizePerson(name))
            yield return name;
    }
}
Let’s stop a minute and consider what you’ve accomplished, and how you can use these techniques in other use cases and other applications.  For this discussion, let’s assume you’ve changed PhoneBook.FindListFor() to be an enumerator method as well.
You started with a pipeline that looked like this:
  1. Create a list of every phone book entry in the New York phonebook
  2. Examine every entry in that list
  3. Create name from the phone book entry
  4. If the name is recognized, add it to the output list
By changing methods to enumerators, you’ve created a pipeline that looks like this:
  1. Read an entry from the phone book.
  2. Create a name for that entry
  3. If the name is recognized, return the name
  4. repeat

That’s a good start.  This set of changes took away much of the memory pressure that this method placed on the system. There may only be one PhoneBookEntry and one string representation of a name in memory at one time. Any PhoneBookEntry objects already processed are eligible for garbage collection. 
You can do better by introducing predicates, in the form of .NET delegates. The code you created is a little bit more reusable, but still suffers from being very specific to the operation at hand:  finding recognized names from the New York Phone book.  In order to make this code more reusable, you need to parameterize that specific portion of the algorithm. The problem, though, is that the specific portion of the algorithm is actually code.  Luckily, the .NET Framework and C# have a way to introduce code as a parameter to a method: a delegate.
Start with your ConvertToNames method. With that as a base, you can build what we want: a method that transforms an input sequence into an output sequence of a different type.  Here is the ConvertToNames method:
IEnumerable<string> ConvertToNames(IEnumerable<PhoneBookEntry> list)
{
    foreach (PhoneBookEntry entry in list)
        yield return string.Format("{0} {1}", entry.FirstName, entry.LastName);
}

What you want is to pass the code `string.Format(...)` as a parameter to the method. So you change the signature of ConvertToNames like this:
// A generic method to transform one sequence into another:
delegate Tout Action<Tin, Tout>(Tin element);
IEnumerable<Tout> Transform<Tin, Tout>(IEnumerable<Tin> list, Action<Tin, Tout> method)
{
    foreach (Tin entry in list)
        yield return method(entry);
}
This has some new syntax, but it’s really fairly simple.  The delegate definition defines the signature of any method that takes one input parameter and returns a single object of another type. Transform simply defines a pipeline that returns the output of the delegate for every object in the input sequence. It’s the same thing as ConvertToNames, but it can be used for any input type, any output type, and any algorithm that transforms one type into another.
You’d call that method like this:
// Fourth attempt:
public IEnumerable<string> PeopleIKnowInNewYork()
{
    IEnumerable<PhoneBookEntry> newYorkNumbers =
        PhoneBook.FindListFor("New York");

    foreach (string name in Transform(newYorkNumbers,
        delegate(PhoneBookEntry entry)
        {
            return string.Format("{0} {1}", entry.FirstName, entry.LastName);
        }))
    {
        if (RecognizePerson(name))
            yield return name;
    }
}

There are a few new concepts here, so let’s go over it in detail. This code is a little easier to understand if you change the structure just a bit.  So here’s the fifth version:

// Fifth attempt:
public IEnumerable<string> PeopleIKnowInNewYork()
{
    IEnumerable<PhoneBookEntry> newYorkNumbers =
        PhoneBook.FindListFor("New York");

    IEnumerable<string> names = Transform(newYorkNumbers,
        delegate(PhoneBookEntry entry)
        {
            return string.Format("{0} {1}", entry.FirstName, entry.LastName);
        });

    foreach (string name in names)
    {
        if (RecognizePerson(name))
            yield return name;
    }
}
The expression defining names creates a new enumeration over all the names harvested from the phone book.  It declares the delegate method inline, making use of anonymous delegates.  Note that I’m only creating a single-line delegate here. I’d recommend against creating complicated methods as anonymous delegates. But when you need to wrap an existing function call in a delegate, this syntax is much simpler for other developers to follow.
There’s one last loop to convert to something more generic:  The loop that checks for names you know. This one can be refactored into a well-known generic pattern:
delegate bool Predicate<T>(T inputValue);
IEnumerable<T> Filter<T>(IEnumerable<T> list, Predicate<T> condition)
{
    foreach (T item in list)
        if (condition(item))
            yield return item;
}
You should recognize this pattern by now. The Filter is just a pipeline that returns all members of a list that match the condition specified by the predicate.  One last look at that PeopleIKnowInNewYork method shows how you can use it:

// Final version:
public IEnumerable<string> PeopleIKnowInNewYork()
{
    IEnumerable<PhoneBookEntry> newYorkNumbers =
        PhoneBook.FindListFor("New York");

    IEnumerable<string> names = Transform(newYorkNumbers,
        delegate(PhoneBookEntry entry)
        {
            return string.Format("{0} {1}", entry.FirstName, entry.LastName);
        });

    return Filter(names, delegate(string name)
        { return RecognizePerson(name); });
}
The final version is quite a bit different from what you started with, and some may argue that it’s not that different.  But it depends on how you apply it. The final version is structured around much more powerful building blocks.  You built a generic method that filters a list of any type, as long as you supply the specific condition for a single item. You built a generic method that transforms a sequence of one type into a sequence of another type. You can leverage this same technique for other operations.  This method samples an input sequence, returning every Nth item:
IEnumerable<T> NthItem<T>(IEnumerable<T> input, int sampleRate)
{
    int sample = 0;
    foreach (T aSample in input)
    {
        ++sample;
        if (sample % sampleRate == 0)
            yield return aSample;
    }
}

This method generates a sequence, based on some factory method you define:
delegate T Generator<T>();
IEnumerable<T> Generate<T>(int number, Generator<T> factory)
{
    for (int i = 0; i < number; i++)
        yield return factory();
}
And this method merges two sequences of the same type into a new sequence of another type by combining them pairwise:
delegate Tout MergeOne<Tin, Tout>(Tin a, Tin b);
IEnumerable<Tout> Merge<Tin, Tout>(IEnumerable<Tin> first,
    IEnumerable<Tin> second, MergeOne<Tin, Tout> factory)
{
    IEnumerator<Tin> firstIter = first.GetEnumerator();
    IEnumerator<Tin> secondIter = second.GetEnumerator();
    while (firstIter.MoveNext() && secondIter.MoveNext())
        yield return factory(firstIter.Current, secondIter.Current);
}
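To see how these building blocks snap together outside the phone-book example, here is a small self-contained sketch; the Pipeline class and the integer data are mine, not part of the article:

```csharp
using System;
using System.Collections.Generic;

class Pipeline
{
    public delegate Tout Action<Tin, Tout>(Tin element);
    public delegate bool Predicate<T>(T inputValue);

    // Transform: the output of the delegate for every item in the input.
    public static IEnumerable<Tout> Transform<Tin, Tout>(
        IEnumerable<Tin> list, Action<Tin, Tout> method)
    {
        foreach (Tin entry in list)
            yield return method(entry);
    }

    // Filter: every item that satisfies the predicate.
    public static IEnumerable<T> Filter<T>(
        IEnumerable<T> list, Predicate<T> condition)
    {
        foreach (T item in list)
            if (condition(item))
                yield return item;
    }

    static void Main()
    {
        int[] numbers = { 1, 2, 3, 4, 5, 6 };

        // Square every number, then keep only the even squares.
        IEnumerable<int> squares =
            Transform<int, int>(numbers, delegate(int n) { return n * n; });
        foreach (int even in Filter<int>(squares, delegate(int n) { return n % 2 == 0; }))
            Console.WriteLine(even); // prints 4, 16, 36
    }
}
```

Because both stages are iterators, only one element is in flight at a time; nothing is squared until Filter pulls it through the pipeline.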
By separating the actions on a single object from the actions on a sequence of types, you created a set of more powerful building blocks to work with very large sets of sophisticated data types.

Summary

In this article, I showed you how custom iterators, delegates, and generic methods can be combined to create more powerful reusable building blocks for your applications.  By applying these techniques yourself, you’ll find many uses for them, and you’ll be more ready for C# 3.0, where many of these techniques enjoy more embedded language support.

Monday, September 9, 2019

Cryptography Fundamentals - One Time Pad vs Caesar Cipher

Encryption/ Decryption
Encipher/Decipher

Improvement on Caesar Cipher


Shift by any number of spaces to shuffle the alphabet
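A minimal sketch of that generalized shift (the names are illustrative; non-letters pass through unchanged):

```csharp
using System;

class CaesarDemo
{
    // Shift each letter by a fixed number of positions, wrapping around.
    public static string Encrypt(string plaintext, int shift)
    {
        char[] output = plaintext.ToCharArray();
        for (int i = 0; i < output.Length; i++)
        {
            char c = output[i];
            if (char.IsLetter(c))
            {
                char basis = char.IsUpper(c) ? 'A' : 'a';
                output[i] = (char)(basis + (c - basis + shift) % 26);
            }
        }
        return new string(output);
    }

    public static string Decrypt(string ciphertext, int shift)
    {
        // Decryption is just shifting back the other way.
        return Encrypt(ciphertext, 26 - shift % 26);
    }

    static void Main()
    {
        string cipher = Encrypt("Attack at dawn", 3);
        Console.WriteLine(cipher);              // Dwwdfn dw gdzq
        Console.WriteLine(Decrypt(cipher, 3));  // Attack at dawn
    }
}
```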

One Time Pad
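The one-time pad can be sketched as XOR with a truly random key at least as long as the message (the class name here is mine). XORing twice with the same key restores the plaintext, which is why the key must be random, secret, and never reused:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class OneTimePadDemo
{
    // XOR the message with the key; XOR the result again to decrypt.
    public static byte[] Xor(byte[] message, byte[] key)
    {
        if (key.Length < message.Length)
            throw new ArgumentException("Key must be at least as long as the message.");
        byte[] output = new byte[message.Length];
        for (int i = 0; i < message.Length; i++)
            output[i] = (byte)(message[i] ^ key[i]);
        return output;
    }

    static void Main()
    {
        byte[] message = Encoding.UTF8.GetBytes("HELLO");
        byte[] key = new byte[message.Length];
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(key);                      // the key must never be reused

        byte[] cipher = Xor(message, key);
        byte[] recovered = Xor(cipher, key);        // XOR twice restores the plaintext
        Console.WriteLine(Encoding.UTF8.GetString(recovered)); // HELLO
    }
}
```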




I recently watched The Spy on Netflix, starring Sacha Baron Cohen.
It used Morse code. The topic is different, but it shows how things were transmitted in the old days.

Thursday, August 29, 2019

Why is an automation test suite not worth the investment?

It is a vicious cycle.

Here is the thing: why is an automation test suite not worth the investment? Most of us may debate this, but CIOs don't have time to seek the return on investment or audit the whole investment. It is just another brick in the wall.


This explains why:

  • We hire consultants to do the job.
  • The consultants or contractors create the automation suite and automate it.
  • It keeps adding more and more functionality.
  • It becomes one big, giant pack.


  1. Problem 1: The big suite doesn't finish on time; it consumes a whole day, because the automation testers don't understand the architecture and have little idea about performance or day-to-day coding.
  2. Problem 2: If the automation suite yields no results on time, it is futile.
  3. Problem 3: No one has time to look into this. No one wants to parallelize the runs or identify the sweet, simple solution that meets the objective; say, a regression pack that runs and yields results in almost no time, added to the release pipeline.
  4. Problem 4: Agile is fragile, and so is our functional domain. Changes are radical. The automation suite is not an actual copy of production or a real-time match, so it fails the whole objective.
  5. Problem 5: No one looks into it and no one questions it. If questioned, it is taken care of at that point in time, and later it's the same old story.


Bottom line: the automation test suite should be part of DevOps and ingrained in the process.

Tuesday, August 27, 2019

Chaining the C# null coalescing ?? Operator

Simple example

string partner = Request.QueryString["GoogleId"] ?? 
                 Request.QueryString["PartnerId"] ?? 
                 Request.QueryString["UserKey"] ?? 
                 string.Empty;

How to expose an internal class to a specific assembly: use InternalsVisibleTo

An important thing to note: if the friend assembly is strongly named, then you have to mention its public key as well.

Assembly A (A.dll) has an internal class I.
Assembly B (B.dll) needs to access that internal class.

In Assembly A, go to the properties file (AssemblyInfo.cs) and add this line:

[assembly: InternalsVisibleTo("B")]
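For the strongly named case, the attribute must carry the friend assembly's full public key. A hedged sketch follows; the key value is a placeholder, obtained in practice with `sn -Tp B.dll`:

```csharp
// In a source file of Assembly A (typically Properties/AssemblyInfo.cs).
using System.Runtime.CompilerServices;

// Friend assembly without a strong name:
[assembly: InternalsVisibleTo("B")]

// Friend assembly with a strong name: append the FULL public key,
// not just the public key token. (Placeholder value below.)
// [assembly: InternalsVisibleTo("B, PublicKey=<full-public-key-hex>")]
```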

Performance Optimization: Sitecore 9.2 Content Delivery Server

After migrating a website from Sitecore 8.2.1 to Sitecore 9.2, I noticed my application was extremely slow. The only place I could search for issues was the Sitecore log at app_data/logs. My initial reaction on opening it was to see a whole lot of different log files.

The first step was to look at these logs closely. I found there were a lot of exceptions around xDB and xConnect, which is a real problem. Then I realised the Redis cache was playing up. Then I found there was no need for the EXM email manager, as I'm not using it. I even created a config patch to remove unnecessary log files that were not adding any value; one of them was the performance log counter that sits under the app_data diagnostics folder, along with the health monitor.
https://doc.sitecore.com/developers/91/platform-administration-and-architecture/en/content-delivery--cd-.html







Thursday, August 22, 2019

Glass Mapper V5 for Sitecore 9.2 breaking change: SitecoreChildren IsLazy is not lazy any more

    public partial interface ICategoryFolderEntity
    {
        /// <summary>
        /// Gets the list of categories in this folder
        /// </summary>
        [SitecoreChildren]
        IEnumerable CategoryList { get; }
    }

    public partial class CategoryFolderEntity : ICategoryFolderEntity
    {
        /// <summary>
        /// Gets the list of categories in this folder
        /// </summary>
        [SitecoreChildren]
        public virtual IEnumerable CategoryList { get; set; }
    }

No need for
[SitecoreChildren(IsLazy = false)]
Just [SitecoreChildren]
and a virtual IEnumerable CategoryList property.
Remember: virtual.

Reference:

https://sitecore.stackexchange.com/questions/16530/after-upgrading-to-glass-mapper-5-unable-to-globally-enable-lazy-loading

Tuesday, August 20, 2019

Sitecore 8.2 Migration to sitecore 9.2 DLL Hell

While migrating a Sitecore MVC .NET solution, one needs to be very careful with the following set of DLLs, which might conflict by version; you can end up spending a lot of time patching versions in web.config or app.config files.



Here is the key:-


Sitecore 8.2, for example, targets .NET Framework 4.5.2, and all the dependency DLLs must match; otherwise you might end up seeing these kinds of exceptions every time. Indeed, you're deep down the rabbit hole.

Before you start, take note of the versions of all the DLLs below in the /bin folder of your vanilla Sitecore installation. Sitecore 9.2 targets .NET Framework 4.7.1 / 4.7.2.

Even the Sitecore documentation doesn't give you a summary of the DLL versions it supports, the way it has compatibility tables for all platform dependencies. It would be just as useful to have the DLL version dependencies with which the Sitecore framework was constructed.

Go in this order and ensure your web apps and project libraries have consistent DLLs across the board:-

Sitecore Intrinsic Assembly Reference


  • Sitecore.Kernel
  • Sitecore.ContentSearch
  • Sitecore.ContentSearch.Linq
  • Sitecore.Mvc
  • Sitecore.Client
  • Sitecore.Analytics
  • Sitecore.Analytics.Model


.Net Framework Assembly Reference


  • System.Web.Mvc
  • System.Web.Http


The King's Newtonsoft.Json


  • Newtonsoft.Json (this one in particular will be a challenge in terms of MVC vs OWIN connectors; a slight version deviation will take much of your time)


DI - Reference


  • Microsoft.Extensions.DependencyInjection.Abstractions

Identity Owin


  • Owin
  • Microsoft.Owin
Solr

  • SolrNet
If the version mismatches are handled correctly, the first hurdle of migration is cleared. Trust me, most of the blogs out there talk about Sitecore 9.2 installation and setup. They are good, but no one has covered the actual migration of the .NET solution, which will haunt you when you get started.
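Version patching in web.config is done with assembly binding redirects. Here is a hedged sketch for Newtonsoft.Json; the version numbers are illustrative, so match newVersion to the DLL actually sitting in your /bin:

```xml
<!-- web.config: force every older Newtonsoft.Json reference to the copy in /bin -->
<runtime>
  <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
    <dependentAssembly>
      <assemblyIdentity name="Newtonsoft.Json"
                        publicKeyToken="30ad4fe6b2a6aeed" culture="neutral" />
      <bindingRedirect oldVersion="0.0.0.0-11.0.0.0" newVersion="11.0.0.0" />
    </dependentAssembly>
  </assemblyBinding>
</runtime>
```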

Once you are through with this DLL hell, you then have to focus on the App_Config patches that are specific to your application domain. There is a change that is more related to xConnect vs xDB.

In fact, Sitecore migration is a really painful task. The worst part is that Sitecore is going to end support for Sitecore 8.2 soon. As an application architect, one has to plan such a migration meticulously; otherwise your enterprise application, developed with Sitecore 8.x, might become legacy in no time.

Mainstream support for Sitecore 8.2 is going to end soon :disappointed: 

https://kb.sitecore.net/articles/641167

https://kb.sitecore.net/articles/087164

As an architect, there is no point starting any new project on a .NET Framework-based CMS, as Microsoft has put an end to the .NET Framework. They recently said .NET Framework 4.8 is the last release in its journey.

“The .NET Framework is on it’s last release — there will not be another one after 4.8”

https://devblogs.microsoft.com/dotnet/net-core-is-the-future-of-net/

https://betanews.com/2019/05/07/future-of-dotnet/

https://medium.com/@andy.watt83/the-net-framework-is-done-8aec3bbae12d



Sunday, August 18, 2019

Sitecore 8.2 to sitecore 9.2 migration: Not an easy task

Having said that, Sitecore 9.2's changes are far different from Sitecore 8.2's. If you look at the KB site, and even at Sitecore 9.2's xConnect, Identity Server, and so on, it comes with a lot of changes, which makes CD .NET solution migration unbearable.

The main pain areas need tackling. I'm not talking about MongoDB/xDB or xConnect, or the CMS content migration; I'm talking about the Sitecore MVC .NET solution that needs to be replatformed. It is a huge effort. I sometimes wondered: is it worth doing the Sitecore 8.2 to Sitecore 9.2 migration, or shall I wait for Sitecore to replatform its whole technology stack onto .NET Core?


Pain areas:-


  • App_Config
  • Referenced DLLs: MVC, Kernel, dependency injection, and so on
  • Glass Mapper version, if you are using it
  • TDS version
  • T4MVC template
  • MVC .NET version
  • .NET Framework 4.5.2 to 4.7.2

Sitecore MVPs out there have done a great job promoting Sitecore 9, but replatforming a whole existing site onto Sitecore 9 is a huge task.

Then there are heaps of other minor parts of the application architecture that you might have to replatform or refactor.