Anton Efremov

Building a Hyperledger Fabric consortium based on Azure Kubernetes Service (AKS) template

In the previous blog, we deployed a bare Hyperledger Fabric consortium instance based on the Azure AKS template.

Now let’s install and instantiate the actual blockchain code (chaincode) to form the consortium, and then execute it. To complete this, follow the steps below:

 

Step 1. We are going to perform a lot of configuration work in the shell, so first launch the Azure Cloud Shell. If you’re launching it for the first time, you’ll be asked to create a new storage account; alternatively, you can choose an existing storage account to persist your shell environment settings.

[Screenshot: AZ Bash Cloud Shell]

Step 2. Now we need to communicate with the template that we have already instantiated. The Microsoft team has prepared a set of scripts (azhlfTool) for this purpose. These scripts help us communicate with the deployed components and perform typical admin actions on a blockchain network, such as chaincode installation and instantiation. You can find the full package of tools, including all the scripts mentioned, here: https://github.com/Azure/Hyperledger-Fabric-on-Azure-Kubernetes-Service

In order to pull and set up a local copy of the tools, we need to run the following terminal command:

curl https://raw.githubusercontent.com/Azure/Hyperledger-Fabric-on-Azure-Kubernetes-Service/master/azhlfToolSetup.sh | bash

As a result, you should see an output similar to the one below and, more importantly, a new ‘azhlfTool’ directory:

[Screenshot: Cloning azhlfTool]

Step 3. Let’s now install the azhlfTool by running the commands below:

$ cd azhlfTool
$ npm install
$ npm run setup

Step 4. We’re ready to set up environment variables. It’s a good idea to save them into a local file. In the screen captures below, I create a text file named ‘my_config’ and edit it using the built-in text editor directly in the Cloud Shell.

[Screenshot: Saving env vars into a config file]

$ touch my_config
$ code my_config
$ source my_config

Orderer Environment Variables

ORDERER_ORG_SUBSCRIPTION="Your Subscription"
ORDERER_ORG_RESOURCE_GROUP="RG_HL_Blog_O"
ORDERER_ORG_NAME="blogOrgO"
ORDERER_ADMIN_IDENTITY="admin.$ORDERER_ORG_NAME"
CHANNEL_NAME="azbloghlfchannel"

Peer Environment Variables

PEER_ORG_SUBSCRIPTION="Your Subscription"
PEER_ORG_RESOURCE_GROUP="RG_HL_Blog_P"
PEER_ORG_NAME="blogOrgP"
PEER_ADMIN_IDENTITY="admin.$PEER_ORG_NAME"

Storage Environment Variable

STORAGE_SUBSCRIPTION="Your Subscription"
STORAGE_RESOURCE_GROUP="RG_HL_Blog_O"
STORAGE_ACCOUNT="hlfbloge4264"
STORAGE_LOCATION="australiaeast"
STORAGE_FILE_SHARE="hlfbloge42649ccc"
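Before moving on, it can save some debugging later to confirm that everything from my_config actually made it into the shell environment. The helper below is my own addition, not part of azhlfTool — a minimal sketch that checks each named variable is set and non-empty:

```shell
# Hypothetical helper (not part of azhlfTool): fail fast if any variable
# from my_config is unset or empty before running the azhlf commands.
require_vars() {
  missing=0
  for v in "$@"; do
    eval "val=\${$v}"           # indirect lookup of the variable's value
    if [ -z "$val" ]; then
      echo "missing: $v"
      missing=1
    fi
  done
  [ "$missing" -eq 0 ] && echo "all variables set"
}
```

A typical usage after editing the config file would be `source my_config && require_vars ORDERER_ORG_NAME PEER_ORG_NAME STORAGE_ACCOUNT CHANNEL_NAME`.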

Step 5. After setting the storage environment variables, we need a storage account. If you already have one, skip the creation commands below; otherwise, create a new resource group and storage account with the following commands:

az account set --subscription $STORAGE_SUBSCRIPTION
az group create -l $STORAGE_LOCATION -n $STORAGE_RESOURCE_GROUP

[Screenshot: Storage Resource Group creation]

az storage account create -n $STORAGE_ACCOUNT -g $STORAGE_RESOURCE_GROUP -l $STORAGE_LOCATION --sku Standard_LRS

[Screenshot: Storage Account creation]

Step 6. Now we retrieve the storage key and create a file share with the commands below:

STORAGE_KEY=$(az storage account keys list --resource-group $STORAGE_RESOURCE_GROUP  --account-name $STORAGE_ACCOUNT --query "[0].value" | tr -d '"')

az storage share create  --account-name $STORAGE_ACCOUNT  --account-key $STORAGE_KEY  --name $STORAGE_FILE_SHARE

[Screenshot: Storage Key/Share creation]

Step 7. Now we need to set up the Azure file share connection string:

STORAGE_KEY=$(az storage account keys list --resource-group $STORAGE_RESOURCE_GROUP  --account-name $STORAGE_ACCOUNT --query "[0].value" | tr -d '"')

SAS_TOKEN=$(az storage account generate-sas --account-key $STORAGE_KEY --account-name $STORAGE_ACCOUNT --expiry `date -u -d "1 day" '+%Y-%m-%dT%H:%MZ'` --https-only --permissions lruwd --resource-types sco --services f | tr -d '"')

AZURE_FILE_CONNECTION_STRING=https://$STORAGE_ACCOUNT.file.core.windows.net/$STORAGE_FILE_SHARE?$SAS_TOKEN

[Screenshot: Setting connection string vars]
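To make the shape of the resulting connection string concrete, here is the same assembly with dummy placeholder values (the real STORAGE_KEY and SAS_TOKEN come from the `az` commands above; the names and token below are illustrations only):

```shell
# Dummy values for illustration only; real ones come from the az CLI.
DEMO_ACCOUNT="hlfbloge4264"
DEMO_SHARE="hlfbloge42649ccc"
DEMO_SAS="sv=2021-01-01&sig=placeholder"   # not a real SAS token

# The connection string is simply the file share URL with the SAS token
# appended as a query string.
DEMO_CONNECTION_STRING="https://$DEMO_ACCOUNT.file.core.windows.net/$DEMO_SHARE?$DEMO_SAS"
echo "$DEMO_CONNECTION_STRING"
```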

Step 8. Now we execute the commands below to fetch each organization’s connection profile, admin user identity, and MSP from the Azure Kubernetes cluster and store them in the client application’s local store.

For orderer org:

./azhlf adminProfile import fromAzure -o $ORDERER_ORG_NAME -g $ORDERER_ORG_RESOURCE_GROUP -s $ORDERER_ORG_SUBSCRIPTION
./azhlf connectionProfile import fromAzure -g $ORDERER_ORG_RESOURCE_GROUP -s $ORDERER_ORG_SUBSCRIPTION -o $ORDERER_ORG_NAME
./azhlf msp import fromAzure -g $ORDERER_ORG_RESOURCE_GROUP -s $ORDERER_ORG_SUBSCRIPTION -o $ORDERER_ORG_NAME

[Screenshot: Getting orderer org connection profile]

For peer org:

./azhlf adminProfile import fromAzure -g $PEER_ORG_RESOURCE_GROUP -s $PEER_ORG_SUBSCRIPTION -o $PEER_ORG_NAME
./azhlf connectionProfile import fromAzure -g $PEER_ORG_RESOURCE_GROUP -s $PEER_ORG_SUBSCRIPTION -o $PEER_ORG_NAME
./azhlf msp import fromAzure -g $PEER_ORG_RESOURCE_GROUP -s $PEER_ORG_SUBSCRIPTION -o $PEER_ORG_NAME

[Screenshot: Getting peer org connection profile]

Step 9. Now we finally create a new channel with the command below:

./azhlf channel create -c $CHANNEL_NAME -u $ORDERER_ADMIN_IDENTITY -o $ORDERER_ORG_NAME

[Screenshot: Creating a channel]

Step 10. Execute the commands below in the following order to add the peer organization to the channel and consortium.

1. From the peer organization client, upload the peer organization MSP to Azure Storage:

./azhlf msp export toAzureStorage -f  $AZURE_FILE_CONNECTION_STRING -o $PEER_ORG_NAME

2. From the orderer organization client, download the peer organization MSP from Azure Storage, then add the peer organization to the channel and consortium:

./azhlf msp import fromAzureStorage -o $PEER_ORG_NAME -f $AZURE_FILE_CONNECTION_STRING
./azhlf channel join -c  $CHANNEL_NAME -o $ORDERER_ORG_NAME -u $ORDERER_ADMIN_IDENTITY -p $PEER_ORG_NAME
./azhlf consortium join -o $ORDERER_ORG_NAME -u $ORDERER_ADMIN_IDENTITY -p $PEER_ORG_NAME

3. From the orderer organization client, upload the orderer connection profile to Azure Storage so that the peer organization can connect to the orderer nodes using it:

./azhlf connectionProfile  export toAzureStorage -o $ORDERER_ORG_NAME -f $AZURE_FILE_CONNECTION_STRING

4. From the peer organization client, download the orderer connection profile from Azure Storage, then add the peer nodes to the channel:

./azhlf connectionProfile  import fromAzureStorage -o $ORDERER_ORG_NAME -f $AZURE_FILE_CONNECTION_STRING
./azhlf channel joinPeerNodes -o $PEER_ORG_NAME  -u $PEER_ADMIN_IDENTITY -c $CHANNEL_NAME --ordererOrg $ORDERER_ORG_NAME

[Screenshot: Building consortium]

Step 11. Let’s set the anchor peer (an anchor peer is a peer node on a channel that peers from other organizations can discover and communicate with):

./azhlf channel setAnchorPeers -c $CHANNEL_NAME -p peer1 -o $PEER_ORG_NAME -u $PEER_ADMIN_IDENTITY --ordererOrg $ORDERER_ORG_NAME

[Screenshot: Setting an anchor peer]

Step 12. We are getting closer to instantiating the sample chaincode provided in the azhlfTool package. First, let’s set the chaincode-specific environment variables:

ORGNAME=$PEER_ORG_NAME
USER_IDENTITY="admin.$ORGNAME"
CC_NAME=demo_cc
CC_VERSION=1
CC_LANG=golang
CC_PATH=/home/<YOUR_AZURE_USER_NAME>/azhlfTool/samples/chaincode/src/chaincode_example02/go

[Screenshot: Setting env vars to instantiate chaincode]
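Before installing, it can save a failed round-trip to verify that the chaincode path actually exists and contains Go sources (a stray space or wrong user name in CC_PATH is easy to miss). This check is my own addition, not part of azhlfTool:

```shell
# Hypothetical sanity check (not part of azhlfTool): verify that CC_PATH
# exists and contains at least one .go source before `azhlf chaincode install`.
check_cc_path() {
  p="$1"
  if [ ! -d "$p" ]; then
    echo "missing directory: $p"
    return 1
  fi
  if ! ls "$p"/*.go >/dev/null 2>&1; then
    echo "no .go sources in: $p"
    return 1
  fi
  echo "chaincode path ok"
}
```

Run it as `check_cc_path "$CC_PATH"` after sourcing your config file.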

Step 13. We are ready to install the chaincode:

./azhlf chaincode install -o $ORGNAME -u $USER_IDENTITY -n $CC_NAME -p $CC_PATH -l $CC_LANG -v $CC_VERSION

[Screenshot: Installing chaincode]

Step 14. Once the chaincode is installed, it is ready to be instantiated with input parameters. We pass four arguments, matching what the Init method expects:

./azhlf chaincode instantiate -o $ORGNAME -u $USER_IDENTITY -n $CC_NAME -v $CC_VERSION -c $CHANNEL_NAME -f init --args "a" "2000" "b" "1000"

[Screenshot: Calling Init]

Step 15. Now we are ready to call the chaincode and see if it really works!

We can invoke and query the respective chaincode methods to initiate an asset transfer and check that the balance is updated after the operation:

./azhlf chaincode invoke -o $ORGNAME -u $USER_IDENTITY -n $CC_NAME -c $CHANNEL_NAME -f invoke -a "a" "b" "10"

./azhlf chaincode query -o $ORGNAME -p peer1 -u $USER_IDENTITY -n $CC_NAME -c $CHANNEL_NAME -f query -a "a"

[Screenshot: Calling the instance methods]
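As a sanity check on the query result: Init set a=2000 and b=1000, and the invoke moved 10 from "a" to "b". Modeling that transfer in plain shell arithmetic (this is only a model of the sample chaincode's logic, not the chaincode itself) shows what the query on "a" should return:

```shell
# Model of the sample chaincode's transfer logic, not the chaincode itself.
a=2000          # initial balance of "a" from the instantiate call
b=1000          # initial balance of "b"
amount=10       # amount moved by the invoke call
a=$((a - amount))
b=$((b + amount))
echo "a=$a b=$b"   # expect: a=1990 b=1010
```

So the query on "a" should report 1990 after the invoke completes.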

We can see that the deployed sample code is working in the fully configured Azure environment.

In summary, in this blog we’ve configured a Hyperledger Fabric consortium with Azure tools.

Now we are ready to expose it to the outside world through an API server. As you remember, our main goal is to integrate a Hyperledger Fabric template instance deployed on Azure with SAP Business Technology Platform solutions. We will continue reaching this goal and building our project further, so watch out for upcoming blogs. See you soon!

Comments

Anton Efremov (Blog Post Author):

      Hi Gregory,

      Thank you for your comment. The current version available is 1.4.10. You can find more information and useful links here.

      I've re-checked all my links and they seem to be correct. Could you please confirm that a particular link doesn't work for you?

      Cheers,
      Anton

Former Member:

      Hi Anton,

Thank you for the Fabric version confirmation. I think this GitHub link provides a little more detail behind it: https://github.com/hyperledger/fabric/releases/tag/v1.4.10

For some reason the link, and all of the 'previous' blog, was unavailable yesterday, but it's all good now.

In any event, it seems like SAP will stay closer, in terms of hyperscaling, to Azure than to GCP or AWS. Maybe that was already written into the Embrace, or their Fabric versions are getting 'stale', but it is still interesting timing, since the 'neo'-like Fabric service (v1.4.4, I think) is expiring this month and I'm sorry to see it 'go'.

Can't wait to see it in a HANA.

      Cheers,

      Greg