Microservices Assessment Framework


Mohit is an experienced enterprise architect and blogger. He has consulted for various organizations and trained multiple teams, enabling them to successfully adopt and improve microservices architecture.

Based on his experience, Mohit is working on a microservices assessment framework with the following three objectives.

  1. Readiness – Assess whether your organization is ready to adopt microservices.
  2. Fitness – Assess whether microservices are a good fit for your organization.
  3. Review – Evaluate your (microservices) architecture and identify areas of improvement.

 


The proposed framework assesses the organization, its processes, and the base architecture. It includes questionnaires assessing the following items.

  1. Business Drivers – Determine whether you have clear and valid business drivers for MSA.
  2. Development Velocity – Determine whether you can benefit from MSA.
  3. Base Architecture – Determine whether your base architecture has all the required components.
  4. Infrastructure – Determine whether your organization has developer- and MSA-friendly infrastructure.
  5. Organization Structure – Determine whether you have the organization structure required for MSA.
  6. Processes – Determine whether you have the organizational processes required for MSA.
  7. Individual Service Design – Evaluate the design of each individual service.

Stay tuned for more information. Please contact us to learn more.

 

 


Why Is Swagger JSON Better Than Swagger Java Client?


1. The Swagger Java-Based Client Using Java Annotations on the Controller Layer

Pros and Cons

  • It’s the old way of creating web-based REST API documents through the Swagger Java library.
  • It’s easy for Java developers to code.
  • All API descriptions of the endpoints are added in the Java annotation parameters (see the sketch after this list).
  • The Swagger API dependency has to be added to the Maven configuration file pom.xml.
  • It adds performance overhead because of the extra processing time for creating the Swagger GUI files (CSS, HTML, JS, etc.), and parsing the annotation logic on the controller classes adds further overhead. It also makes the build a little heavier to deploy for microservices, where build size should be smaller.
  • The code looks dirty because extra code has to be added to the Spring MVC controller classes through annotations. If the description of the API contract is long, it makes the code unreadable and unmaintainable.
  • Any change in the API contract requires a Java code change, rebuild, and redeployment, even for simple text changes such as the API definition text.
  • The biggest challenge is sharing the API contract with client/QA/BA teams before actual development and making frequent amendments. Service consumers may change their requirements frequently, and it is then very difficult to make those changes in code and regenerate the Swagger GUI pages by redeploying and sharing the updated Swagger dashboard on the actual dev/QA environment.
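
To make this concrete, here is a minimal sketch of the annotation-based style. It assumes a Spring MVC controller annotated with the io.swagger.annotations types that a Swagger/Springfox dependency provides; the OrderController endpoint and the description text are purely illustrative.

    import io.swagger.annotations.Api;
    import io.swagger.annotations.ApiOperation;
    import io.swagger.annotations.ApiParam;
    import org.springframework.http.ResponseEntity;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RestController;

    // The API contract text lives inside the annotations on the controller itself.
    @Api(value = "orders", description = "Operations on customer orders")
    @RestController
    @RequestMapping("/orders")
    public class OrderController {

        @ApiOperation(value = "Fetch an order by its id",
                      notes = "Any wording change here forces a rebuild and redeployment")
        @GetMapping("/{id}")
        public ResponseEntity<String> getOrder(
                @ApiParam(value = "Order identifier", required = true)
                @PathVariable("id") long id) {
            return ResponseEntity.ok("order " + id);
        }
    }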

2. The Swagger JSON File Can Be Written Separately and Provides a Browser-Based GUI

Pros and Cons

  • In this newer approach, all of the above challenges with the Java-based client solution are solved.
  • The developer initially creates a JSON file and shares and agrees on it with the service consumers and stakeholders. It gets signed off after amendments; no code change or redeployment is required.
  • The code will be cleaner, readable, and maintainable.
  • There is no extra overhead for GUI file creation and processing; performance is better and the build is more lightweight for microservices.
  • There is no code dependency for any API contract changes.
  • The Swagger JSON file resides in the project resources (inside src/main/resources/swagger_api_doc.json). We can deploy the Swagger UI on one server and point it at different environments.

Note

You can copy and paste the swagger_api_doc.json file content into https://editor.swagger.io/. It will help you modify the content and generate an HTML page. The Swagger GUI provides a web-based interface similar to Postman.
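
For reference, here is a minimal sketch of what such a swagger_api_doc.json file might contain (Swagger 2.0 format; the path and descriptions below are illustrative, not a real contract):

    {
      "swagger": "2.0",
      "info": {
        "title": "Order Service API",
        "version": "1.0.0"
      },
      "basePath": "/",
      "paths": {
        "/orders/{id}": {
          "get": {
            "summary": "Fetch an order by its id",
            "parameters": [
              { "name": "id", "in": "path", "required": true, "type": "integer" }
            ],
            "responses": {
              "200": { "description": "Order found" },
              "404": { "description": "No order exists for the given id" }
            }
          }
        }
      }
    }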

Blockchain in Insurance Claims



Blockchain is a distributed ledger initially used by the Bitcoin cryptocurrency and eventually by many banking organizations to record transactions between parties with high security. This is the start of the blockchain era, and it is anticipated to have long sustainability and acceptance across various industries.

One of the biggest use cases in the insurance industry is the adoption of blockchain in claims processing. Insurance contracts involve various parties such as agents, brokers, repair shops, and third-party administrators, with manual work and duplication at various stages of the value chain. Using blockchain, verification of transactions can be done without any human intervention, making the process completely automated at various stages.

Benefits of using blockchain in claims processing:

  1. The distributed ledger allows the various parties to update information securely (claim forms, evidence, police reports, etc.), helping reduce loss adjustment expenses (LAE).
  2. Fraud Detection – As blockchain maintains a ledger across multiple parties, it has the ability to eliminate errors and fraud. Blockchain technology uses high computing power to authenticate customers, policies, and transactions.
  3. Payments – Claim payments can be made without the need for an intermediary authority for transaction validation, which helps reduce the overall operational cost of claims processing.
  4. As these transactions are highly secure, multi-level review processes can be eliminated, resulting in speedier claims processing.

Logback Is Not Writing a Specific Log File – Solution


Logback Configuration:
<property name="DEV_HOME" value="/root/apps/logs" />
<appender name="FILE-AUDIT" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${DEV_HOME}/myapp.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <!-- rollover daily -->
        <fileNamePattern>${DEV_HOME}/archived/myapp.%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
        <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
            <!-- or whenever the file size reaches the max -->
            <maxFileSize>${rolling.file.max.size}</maxFileSize>
        </timeBasedFileNamingAndTriggeringPolicy>
        <maxHistory>${rolling.file.max.history}</maxHistory>
    </rollingPolicy>
    <encoder>
        <pattern>${rolling.file.encoder.pattern}</pattern>
    </encoder>
</appender>
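
The appender above references a few placeholder properties (${rolling.file.max.size}, ${rolling.file.max.history}, ${rolling.file.encoder.pattern}) that are not shown in the snippet. To make the example self-contained, they could be defined near the top of logback.xml like this (the values are only illustrative defaults):

    <!-- Illustrative defaults; tune for your own environment -->
    <property name="rolling.file.max.size" value="10MB" />
    <property name="rolling.file.max.history" value="30" />
    <property name="rolling.file.encoder.pattern"
              value="%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n" />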

The issue is caused by a conflict between log4j and Logback when migrating from the old log4j to Logback. You need to exclude the log4j dependencies from your existing dependencies; first, identify where they come from by running this command:

$ mvn dependency:tree
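
For every dependency that the tree shows pulling in log4j (or slf4j-log4j12) transitively, add an exclusion in pom.xml. The artifact below is only a placeholder for whichever dependency is at fault:

    <!-- Placeholder artifact: replace with the dependency that pulls in log4j -->
    <dependency>
        <groupId>com.example</groupId>
        <artifactId>some-library-using-log4j</artifactId>
        <version>1.0</version>
        <exclusions>
            <exclusion>
                <groupId>log4j</groupId>
                <artifactId>log4j</artifactId>
            </exclusion>
            <exclusion>
                <groupId>org.slf4j</groupId>
                <artifactId>slf4j-log4j12</artifactId>
            </exclusion>
        </exclusions>
    </dependency>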


=> Then add only these jars for logging using SLF4J and Logback:

    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-api</artifactId>
        <version>${version.slf4j}</version>
    </dependency>

    <dependency>
        <groupId>ch.qos.logback</groupId>
        <artifactId>logback-classic</artifactId>
        <version>${version.logback}</version>
        <scope>runtime</scope>
    </dependency>

    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>log4j-over-slf4j</artifactId>
        <version>${version.slf4j}</version>
        <scope>runtime</scope>
    </dependency>

The detailed issue is posted on Stack Overflow:

http://stackoverflow.com/questions/35813027/logback-is-not-writing-specific-log-file-on-the-linux-server 

Crawl and Index… Nutch / Elasticsearch – Partners in the Making


Hi

In the internet era, there is an old tech saying – “Content is King”  (inspired by old Jungle saying from Phantom.. 🙂 )

One of the common challenges in a content management system is extracting the latest information. In the WWW world, this is commonly known as crawling. The king of the crawler world is Apache Nutch.

Elasticsearch (no longer just the new kid in town) has already established itself as one of the top search platforms. It is only natural that companies are looking at using both platforms together to achieve a better content management system, specifically for the acquire, analyze, publish, and search phases.

Here's a quick and dirty guide to get them up and running.

1. Download nutch
2. set NUTCH_HOME
NUTCH_HOME=/Users/madheshr/tools/apache-nutch-2.2.1
export NUTCH_HOME
3. Clean build
ant clean
ant
4. Verify that a new local deploy is created under NUTCH_HOME/runtime
/Users/madheshr/tools/apache-nutch-2.2.1/runtime/local
5. Under the bin subdirectory of local, create a new directory called urls
6. In urls, create a new file called nutch.txt and edit it to add the URLs to crawl (example below)
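The seed file is just a plain-text list of URLs, one per line; for example, urls/nutch.txt could contain (placeholder URLs):
http://example.com/
http://example.org/news/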
7. Enable the crawler in conf/nutch-site.xml by adding the property below within the configuration tags
<property>
<name>http.agent.name</name>
<value>My Nutch Spider</value>
</property>
8. Note the value and enter the same in conf/nutch-default.xml as the value for <name>http.agent.name</name>
9. Test by running the below command in local/bin

nutch crawl urls -dir /tmp -depth 2

Integrate Nutch and ES
1. Activate elasticsearch indexer plugin
Edit conf/nutch-site.xml

<property>
<name>plugin.includes</name>
<value>protocol-http|urlfilter-regex|parse-(html|tika)|index-(basic|anchor)|indexer-elastic|scoring-opic|urlnormalizer-(pass|regex|basic)</value>
<description>Regular expression naming plugin directory names to
include. Any plugin not matching this expression is excluded.
In any case you need at least include the nutch-extensionpoints plugin. By
default Nutch includes crawling just HTML and plain text via HTTP,
and basic indexing and search plugins. In order to use HTTPS please enable
protocol-httpclient, but be aware of possible intermittent problems with the
underlying commons-httpclient library.
</description>
</property>

2. Verify and add the ES-specific properties to nutch-site.xml

<!-- Elasticsearch properties -->

<property>
<name>elastic.host</name>
<value>localhost</value>
<description>The hostname to send documents to using TransportClient. Either host
and port must be defined or cluster.</description>
</property>

<property>
<name>elastic.port</name>
<value>9300</value>
<description>
</description>
</property>

<property>
<name>elastic.cluster</name>
<value>elasticsearch</value>
<description>The cluster name to discover. Either host and port must be defined
or cluster.</description>
</property>

<property>
<name>elastic.index</name>
<value>nutch</value>
<description>Default index to send documents to.</description>
</property>

<property>
<name>elastic.max.bulk.docs</name>
<value>250</value>
<description>Maximum size of the bulk in number of documents.</description>
</property>

<property>
<name>elastic.max.bulk.size</name>
<value>2500500</value>
<description>Maximum size of the bulk in bytes.</description>
</property>

3. Create a new index in ES, matching the elastic.index value (nutch), if it is not there already

curl -XPUT 'http://localhost:9200/nutch/'
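
Once a crawl has run with the indexer-elastic plugin enabled, a quick query against the index (assuming Elasticsearch's HTTP API is on the default port 9200) confirms that documents are being indexed:

curl 'http://localhost:9200/nutch/_search?q=*:*&pretty'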