I made a post, “XI: The future of neural networks (Part – 1)”, and I am very happy to have received some mails about it. With Sudhir’s permission, I am posting our complete mail thread as a blog, because it contains his valuable analysis and comments along with my reply designs, so that people in the SDN community with a similar interest get the bigger picture and can analyze it further.

From: Sudhir_Porumamilla (https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.sdn.businesscard.sdnbusinesscard?u=i6tqtvzjoi8%3d)

The thread includes this Java reference implementation of the training cycle (cleaned up here; the original excerpt began mid-statement and omitted the variable declarations, and the loop headers are reconstructed from its closing braces):

```java
for (ctr = 0; ctr < total_ctr; ctr++) {            // training cycles
    for (pctr = 0; pctr < 2; pctr++) {             // training patterns
        // The excerpt begins inside this print of the current input pattern
        System.out.print(trg_in[0] + " " + trg_in[1] + " = ");

        // Forward pass: hidden layer then output, each through a sigmoid
        trg_hidden[0] = weight_hidden[0][0]*trg_in[0] + weight_hidden[0][1]*trg_in[1];
        trg_hidden[1] = weight_hidden[1][0]*trg_in[0] + weight_hidden[1][1]*trg_in[1];
        trg_hidden[0] = 1.0F/(1.0F + (float) Math.exp(-trg_hidden[0]));
        trg_hidden[1] = 1.0F/(1.0F + (float) Math.exp(-trg_hidden[1]));
        trg_output = weight_output[0]*trg_hidden[0] + weight_output[1]*trg_hidden[1];
        trg_output = 1.0F/(1.0F + (float) Math.exp(-trg_output));

        if (ctr % (total_ctr/10) == 0) {
            System.out.print(trg_output + " Expected:" + out[pctr]);
            System.out.print("\nHidden: " + trg_hidden[0]);
            System.out.println(" " + trg_hidden[1]);
        }

        // Backward pass: output error, output-weight updates, then the
        // error propagated back through to the hidden-layer weights
        error[pctr] = out[pctr] - trg_output;
        weight_output[0] += learning_rate*error[pctr]*trg_hidden[0]*trg_output*(1-trg_output);
        weight_output[1] += learning_rate*error[pctr]*trg_hidden[1]*trg_output*(1-trg_output);
        error_h[0] = error[pctr]*trg_output*(1-trg_output)*weight_output[0];
        error_h[1] = error[pctr]*trg_output*(1-trg_output)*weight_output[1];
        weight_hidden[0][0] += learning_rate*error_h[0]*trg_in[0]*trg_hidden[0]*(1-trg_hidden[0]);
        weight_hidden[1][0] += learning_rate*error_h[1]*trg_in[0]*trg_hidden[1]*(1-trg_hidden[1]);
        weight_hidden[0][1] += learning_rate*error_h[0]*trg_in[1]*trg_hidden[0]*(1-trg_hidden[0]);
        weight_hidden[1][1] += learning_rate*error_h[1]*trg_in[1]*trg_hidden[1]*(1-trg_hidden[1]);

        if (ctr % (total_ctr/10) == 0) {
            System.out.print("Error (H): " + error_h[0]);
            System.out.println(" " + error_h[1]);
            System.out.print("W(O): " + weight_output[0]);
            System.out.println(" " + weight_output[1]);
            System.out.print("W(H): " + weight_hidden[0][0]);
            System.out.print(" " + weight_hidden[1][0]);
            System.out.print(" " + weight_hidden[0][1]);
            System.out.println(" " + weight_hidden[1][1]);
        }
    }
    net_error = (error[0]*error[0]) + (error[1]*error[1]);
    if (ctr % (total_ctr/10) == 0)
        System.out.println("Net Error:" + net_error);
}
```

Performance is certainly an issue. A single learning cycle took 1 to 3 seconds, which works out to roughly 8 hours for XI to learn XOR, whereas a Java program learns it in about 5 seconds and C code in under 2 seconds. Taking Java as the benchmark, which gives a reasonable time frame, the ability to plug generic Java objects into XI gives us real hope of implementing the same thing in XI.
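For readers who want to reproduce the Java-side timing, the excerpt above can be assembled into a complete, self-contained program. The weight-update equations mirror the excerpt; the biases, random initialization, learning rate, epoch count, and restart logic are my own assumptions added for reliable convergence, not values from the mail.

```java
import java.util.Random;

// A self-contained 2-2-1 sigmoid network learning XOR by backpropagation.
// The update rules follow the excerpt above; biases and restarts are
// illustrative additions (small XOR nets can stall in a local minimum).
public class XorBpn {
    float[][] wh = new float[2][2]; // wh[h][i]: input i -> hidden h
    float[] wo = new float[2];      // wo[h]: hidden h -> output
    float[] bh = new float[2];      // hidden biases (not in the excerpt)
    float bo;                       // output bias (not in the excerpt)

    static float sigmoid(float x) { return 1.0F / (1.0F + (float) Math.exp(-x)); }

    void init(Random rnd) {
        bo = rnd.nextFloat() - 0.5F;
        for (int h = 0; h < 2; h++) {
            wo[h] = rnd.nextFloat() - 0.5F;
            bh[h] = rnd.nextFloat() - 0.5F;
            for (int i = 0; i < 2; i++) wh[h][i] = rnd.nextFloat() - 0.5F;
        }
    }

    // Forward pass: fills hid[] and returns the network output
    float forwardOut(float[] x, float[] hid) {
        for (int h = 0; h < 2; h++)
            hid[h] = sigmoid(wh[h][0]*x[0] + wh[h][1]*x[1] + bh[h]);
        return sigmoid(wo[0]*hid[0] + wo[1]*hid[1] + bo);
    }

    // One epoch over all patterns; returns the summed squared error
    float epoch(float[][] in, float[] out, float lr) {
        float netError = 0;
        float[] hid = new float[2];
        for (int p = 0; p < in.length; p++) {
            float o = forwardOut(in[p], hid);
            float err = out[p] - o;
            float deltaO = err * o * (1 - o);               // output delta
            for (int h = 0; h < 2; h++) {
                float deltaH = deltaO * wo[h] * hid[h] * (1 - hid[h]); // hidden delta
                wo[h] += lr * deltaO * hid[h];
                for (int i = 0; i < 2; i++) wh[h][i] += lr * deltaH * in[p][i];
                bh[h] += lr * deltaH;
            }
            bo += lr * deltaO;
            netError += err * err;
        }
        return netError;
    }

    public static void main(String[] args) {
        float[][] in = { {0,0}, {0,1}, {1,0}, {1,1} };
        float[] out = { 0, 1, 1, 0 };
        Random rnd = new Random(7);
        XorBpn net = new XorBpn();

        // Restart with fresh weights if training stalls in a local minimum
        float err = Float.MAX_VALUE;
        for (int attempt = 0; attempt < 20 && err > 0.05F; attempt++) {
            net.init(rnd);
            for (int ctr = 0; ctr < 20000; ctr++) err = net.epoch(in, out, 0.5F);
        }

        float[] hid = new float[2];
        for (int p = 0; p < 4; p++)
            System.out.println((int) in[p][0] + " XOR " + (int) in[p][1]
                    + " -> " + Math.round(net.forwardOut(in[p], hid)));
    }
}
```

On commodity hardware this trains in well under the 5 seconds quoted above, which is what makes a Java mapping or module such an attractive host for the learning cycle.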
When I started this, I could easily have implemented it as a module and mapping objects using generic Java objects for each layer, without any of these transformations: store the weights in some database, learn as messages flow through XI, and run all the backpropagation inside a Java mapping, a module, or imported JARs, which would give a reasonable time frame for the learning cycles. There is no need to implement it the way I did in the blog. But had I done so, it would have been Java-oriented (a module plus imported JARs in the mapping), and readers who know XI mapping but not BPNs would see nothing but garbage. That is why I implemented it entirely in XI, completely in BPM, without a single line of Java. Once this three-part blog series is over, I plan to present an article (or use my third blog) covering the other designs and ways of fine-tuning the neural-network part. This is just the beginning of bringing neural networks into XI, not the only way of implementing them there. The way I implemented it is mainly for the reader’s understanding: if I did not give those equations and the mappings based on them, a reader who does not know neural networks would just be blank. Let’s take a sample design…
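The alternative design described above — generic Java objects per layer, with the weights persisted outside the message flow and the backpropagation done in a Java mapping or imported JAR — could be sketched roughly as follows. Every class and method name here is a hypothetical illustration, not an XI API, and a plain file stands in for the “some database” mentioned above.

```java
import java.io.*;

// Hypothetical sketch of the "weights persisted externally" design:
// a layer's weight matrix is loaded at the start of a learning cycle,
// updated by the backpropagation step, and saved back afterwards. In XI
// this logic would live inside a Java mapping or an imported JAR; the
// file used here is only a stand-in for a real database.
public class WeightStore {
    // Load a layer's weight matrix, or initialize it randomly if none exists
    public static float[][] load(File f, int rows, int cols) throws IOException {
        float[][] w = new float[rows][cols];
        if (!f.exists()) {
            java.util.Random rnd = new java.util.Random();
            for (int r = 0; r < rows; r++)
                for (int c = 0; c < cols; c++) w[r][c] = rnd.nextFloat() - 0.5F;
            return w;
        }
        try (DataInputStream in = new DataInputStream(new FileInputStream(f))) {
            for (int r = 0; r < rows; r++)
                for (int c = 0; c < cols; c++) w[r][c] = in.readFloat();
        }
        return w;
    }

    // Persist the updated weights after a learning cycle
    public static void save(File f, float[][] w) throws IOException {
        try (DataOutputStream out = new DataOutputStream(new FileOutputStream(f))) {
            for (float[] row : w) for (float v : row) out.writeFloat(v);
        }
    }
}
```

A mapping step would then load the weights, run one forward/backward pass against the incoming payload, and save them back, so the network keeps learning across messages without any of the BPM transformations used in the blog.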
