

Introduction

I often ask myself how SAP projects can be handled in an agile way and what is needed to enable this. Since SAP launched SAP Focused Build, a methodology that enables agile working in SAP projects, I think I have found the answer. In this blog post I would like to share my experiences of working with SAP Focused Build.

SAP Focused Build for SAP Solution Manager 7.2 is a methodology that supports innovation projects in an agile way. It fits perfectly for SAP S/4HANA implementations but can also be used for managing any other build cycle for SAP solutions.

The method is aligned with the Scaled Agile Framework (SAFe) and is designed for large solutions as well as for program-level work with multiple scrum teams.

The Scaled Agile Framework (SAFe) for Lean Enterprises

Source: https://www.scaledagileframework.com 

 

How SAP Focused Build works

 

The process from requirements to deploy

The process from requirements to deploy has five phases; it begins with the creation of the requirements and ends with the successful deployment to the target system.

 

Structure of the elements inside SAP Focused Build

There are four elements, structured as follows:

 

 

Planning and time dimensions

Several dimensions make it possible to plan the development step by step and in an agile way. The following illustration shows the different dimensions, which are aligned with the Scaled Agile Framework (SAFe).

Source: SAP

 

Now let’s dive into the system and have a closer look

Requirements – The Backlog

Requirements represent user stories in the backlog. They are created in the backlog and released after a review. The released requirements represent the workload for development and are planned into waves and sprints.

You can assign requirements directly in your solution documentation at any process or process step. To do so, use the tile “Solution Documentation”.

On the right side you can assign requirements to your processes or process steps. With this, the requirements are linked to the process structure.

 

In the screen “Create New Requirement” you can add more information. After that, you can set the status of the requirement to “Released”.

 

Work Packages

The work that has to be done for a requirement can be split into several work packages. All related documentation, such as the functional specification and the test cases from the solution documentation, is linked to the work package. A work package represents an epic.

 

Work Items

For finer granularity, work items are created next. Each one represents a user story, which is the actual piece of work for a developer.

 

Handover to Development

Developers can use the tile “My Work Items” to see their assigned work items and work on them.

 

Tasks

Optionally, developers can create tasks to organize their work inside a work item.

 

Handover to Test Management

Once the development work is finished, test management can start. For that, the test manager prepares a test plan for the relevant work packages and work items that are finished and ready to test. The testers can then access their assigned test cases with the tile “My Test Executions”. It is a simplified UI from which the tester can open the assigned test cases, set the status for each test case and, if necessary, create defects.

 

After testing is successfully completed, the deployment of the SAP transports into the next system can start, bringing the development cycle to an end.

 

Conclusion

As we have seen, the SAP Focused Build methodology enables working in an agile way. There are many supporting functions and tools to manage the status workflow, the SAP transports and the progress of the development work that has to be done.

What I really like is the close integration of the various disciplines such as requirements management, planning, development, test and deployment.

Do not forget that the use of such a methodology also needs an open mindset from the people who are involved. They need time, a cosy environment and close support to adopt an agile approach like this.

 

Hi! I’m Lucía Subatin. You may remember me from tutorials, blog posts and videos about HANA Express and HANA native development in general.

Source: Lynch, J. (Director) & Oakley, B. (Writer). (November 5, 1992) Marge gets a job in James L. Brooks (Producer) The Simpsons

After the first mini-codejam on First steps with HANA as a Service at SAP TechEd Las Vegas, I’d like to share some insights from the perspective of an on-premise HANA user (if I can call myself a user…). Particularly, an SAP HANA, express edition (ab)user.

In this series of posts, I’d like to cover the fundamental differences with other HANA offerings, some architecture basics and it wouldn’t be me if I didn’t point at how to get started/play with it.

What is the SAP Cloud Platform, SAP HANA Service?

Also referred to as HANA as a Service, this is a fully-managed offering for SAP HANA.

What is “fully managed”? It means that all you need to do is decide where you want your instance to be hosted, what kind of functionality you want and how big in terms of RAM you want it to be.

Administration tasks such as installation, backup and upgrades are taken care of by the ones who also make the platform, SAP! I call this letting businesses mind their own business.

Choosing where to run

As opposed to its sister offering in the Neo stack, HANA as a Service runs on the Cloud Foundry stack in SAP Cloud Platform. This means that you can also choose between (currently) Google Cloud Platform and Amazon Web Services to host your HANA instance within the available regions:

You can check the available services and regions here.

Choosing what to run

Creation of the database instance could not get any easier. I will show a detailed walk-through in a later blog post, but you can choose between two flavors:

  • Standard edition: core database + series data
  • Enterprise edition: same as standard, plus the advanced analytics (text analysis, graph, geospatial, predictive, etc.) that SAP HANA does so well.

In reality, this choice is made before the creation of the actual service instance, as your account needs available quota for either or both services. This means that you, or someone, has purchased such quota.

How many SAP HANA blocks

This is one of the sweetest differences, as well as one of the clear advantages of running in the cloud: it takes away the fear of sizing too low, because adding more hardware is not your problem.

Of course, you should do your homework and use any of the sizing tools available to anticipate how much your legacy database will shrink when data is migrated (and compressed) into HANA. However, one of the beauties of the cloud is that you have the possibility to scale up.

Source: Sandoval, S. (Director) & Groening, M. (Writer). (July 1, 2010) Attack of the Killer App in Futurama

But how much money? Pricing is explained clearly here. You can even use a calculator.

I’m an advocate for free and trial offerings for developers and I’ll get to that in a future post. But the amount of HANA blocks (RAM) and the cost go hand in hand, and that is the only math you need to do.

You will see variations of the cost depending on the region and the edition (standard or enterprise).

You should also note that you can take a subscription in advance for what you plan to consume, or pay for what you actually consume, metered by the hour.

And that’s it. Choose the region, flavor and blocks, and you can then put those blocks to work.

In technical terms, what is HANA as a Service?

Plain and simple, when you deploy an instance you get access to your very own tenant database in its very own system database, which you are not sharing with anyone else.

The service piece of it is that SAP will take care of everything related to making your instance run, and will make sure backups are taken every 15 minutes and kept for up to a month after they are created.

The juicy technical details are interesting enough to deserve their own post. So stay tuned on Twitter or LinkedIn, as it’s coming out soon!

With more and more customers adopting SAP BPC optimized for S/4HANA for finance (“BPC”) as their planning platform, one critical requirement is to trigger a planning sequence as a backend job from Analysis Office (“AO”), especially when the planning sequence takes a long time to complete.

 

Before we dive deep into this topic, assume we have one planning sequence (PS_1) that contains the real business logic we want to run in a backend job, and that it needs a fiscal period parameter from AO.

 

If we insert this planning sequence into an AO workbook and execute it directly from there, it will run in the foreground and Excel will freeze until it is done. We must look for another solution for this requirement.

 

Step 1: Execute planning sequence in process chain

There are several ways to run a planning sequence as a backend job. You can manage it through ABAP code with the function module RSPLSSE_PLSEQ_EXECUTE, or more easily through a BW process chain. Here we’ll discuss the latter, since this option has another benefit: it can manage parallel execution of the planning sequence.
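For illustration, here is a minimal sketch of the first option: calling the function module from a small custom report that can be scheduled as a background job. The importing parameter name I_SEQNM is an assumption and should be verified against the actual function module signature in SE37.

REPORT zrun_ps_1.

* Execute planning sequence PS_1 in the background (e.g. via an SM36 job).
* The parameter name I_SEQNM is an assumption -- check the signature of
* RSPLSSE_PLSEQ_EXECUTE in SE37 before using this.
CALL FUNCTION 'RSPLSSE_PLSEQ_EXECUTE'
  EXPORTING
    i_seqnm = 'PS_1'
  EXCEPTIONS
    OTHERS  = 1.
IF sy-subrc <> 0.
  MESSAGE 'Planning sequence PS_1 failed' TYPE 'E'.
ENDIF.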

Refer to SAP help document about this topic.

We know PS_1 needs the “fiscal period” parameter, and it should be specified by the end user in AO, so just leave the field “Variable Variants” blank. If parallel processing is needed, section 2 of the help document covers that.

Step 2: Define planning function type to trigger process chain

Create a custom planning function type (Z_TRIGGER_PC) to trigger the process chain maintained in RSPC.

To make sure the planning function is executed even when there is no data in the region specified by the filter, “Process Empty Records” must be selected.

The other problem to be addressed in this function is retrieving the parameter value from AO and passing it to PS_1 (which is included in the process chain).

In this program, we first use a custom table to save the parameter value, and then retrieve it with a customer exit variable. The customer exit variable is used in PS_1.
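The exit logic itself is small. Below is a minimal sketch, assuming the classic customer exit for BW variables (function module EXIT_SAPLRRS0_001, include ZXRSRU01); the z-table ZBPC_PARAM and the variable name ZFISCPER_EXIT are hypothetical placeholders.

* Inside include ZXRSRU01 (customer exit for BW variables).
DATA: ls_range TYPE rrrangesid,
      lv_value TYPE /bi0/oifiscper.

CASE i_vnam.
  WHEN 'ZFISCPER_EXIT'.     " hypothetical exit variable used in PS_1
    IF i_step = 1.          " filled automatically at runtime
      " Read the period that the trigger function saved from the AO prompt.
      SELECT SINGLE param_value FROM zbpc_param INTO lv_value
        WHERE param_name = 'FISCPER'.
      IF sy-subrc = 0.
        ls_range-sign = 'I'.
        ls_range-opt  = 'EQ'.
        ls_range-low  = lv_value.
        APPEND ls_range TO e_t_range.
      ENDIF.
    ENDIF.
ENDCASE.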

 

Step 3: Trigger process chain from AO

A process chain can’t be executed from AO directly, so we need to leverage another planning function as a trigger. At the same time, we also need to pass the fiscal period parameter from AO to the backend.

We create another real-time InfoCube RCUBE02, which includes only 0FISCPER and one key figure, and on top of it we create an aggregation level AL01.

On aggregation level AL01, we create the following two items:

  1. A query Q_01 with an input parameter for fiscal period, which will generate the input in the AO prompt dialog.
  2. A planning function PF_TRIGGER with type Z_TRIGGER_PC (see step 2 above for more information about this custom type).

Insert query Q_01 and planning function PF_TRIGGER into one workbook; now, in the workbook prompt dialog, we can input or select the fiscal period.

The only remaining problem is passing the query variable value to planning function PF_TRIGGER.

There are several options for this problem:

  1. Read the query variable value through VBA code, and then assign it to the planning function variable.
  2. Retrieve the query variable value in an Excel cell with a formula, and then specify that cell as the source for the planning function variable.

Once this problem is resolved, the flow works end to end: when the user selects a period and executes planning function PF_TRIGGER from AO, the parameter is passed to the backend, and the custom code in PF_TRIGGER first writes the variable value to a z-table and then triggers the process chain. When the process chain runs, the customer exit variable used in the planning sequence reads its value from the z-table, and that value is then used in PS_1 (the real business logic).
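To make the flow concrete, here is a sketch of the core logic of PF_TRIGGER, written as a plain routine for readability; in a real implementation it would live in the EXECUTE method of the class behind planning function type Z_TRIGGER_PC. The z-table ZBPC_PARAM and the chain name ZPC_RUN_PS_1 are hypothetical, while RSPC_API_CHAIN_START is the standard API for starting a process chain.

FORM trigger_chain USING iv_fiscper TYPE /bi0/oifiscper.

  DATA: ls_param TYPE zbpc_param,
        lv_logid TYPE rspc_logid.

  " 1. Persist the fiscal period from the AO prompt so that the customer
  "    exit variable used in PS_1 can pick it up later.
  ls_param-param_name  = 'FISCPER'.
  ls_param-param_value = iv_fiscper.
  MODIFY zbpc_param FROM ls_param.
  COMMIT WORK.

  " 2. Start the process chain that wraps planning sequence PS_1.
  CALL FUNCTION 'RSPC_API_CHAIN_START'
    EXPORTING
      i_chain = 'ZPC_RUN_PS_1'
    IMPORTING
      e_logid = lv_logid.

ENDFORM.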

If you have been using SAP Analytics Cloud, you have probably noticed, and most likely even used, some of the available options for filtering your data.

SAP Analytics Cloud has the option to define:

  • Story Filter
  • Page Filter
  • Filter for specific charts / tables

 

If you are using SAP Analytics Cloud in combination with SAP BW live data, then on top of the filter options in your story, you can also define BEx variables for filtering in the BEx query itself; with SAP HANA as the data source, you have similar options available.

And last but not least, you also have the functionality of Linked Analysis, which generates filter values based on the user interaction.

It is important to understand the different options that are available to the story designer and story consumer, how those options are applied to your data set, and how your visualizations – such as charts and tables – are impacted by each type of filter.

So in the next few days I will try to outline the different filter scenarios and provide examples for a few combinations of those different filters, so that you can understand how they will be applied.

Now let’s take a look at what options we have in SAP Analytics Cloud itself.

 

Story Filter

The first option for filtering in SAP Analytics Cloud is a Story Filter.

You can simply click the Filter icon in the Tools toolbar and select the dimension for filtering; the dimension name and the filtered values will then be shown in the form of a filter line (see above).

A story filter will impact all the elements in your story across all the pages.

 

Page Filter

You can then turn the Story Filter into a Page Filter (or create a new Page Filter).

After you have turned your Story Filter into a Page Filter, you can also resize the filter to your needs; based on the given space, the filter will adapt its layout.

As indicated by the name, the Page Filter will only impact elements on the given page and not the complete story (assuming there are multiple pages).

In addition, you have the option to decide which of the visualizations should be impacted by the Page Filter.

Another option to create page filters is to simply use the option to create new Input Controls.

 

Filter on a chart / table level

The third option for filtering is to create / change the filter values directly in a particular visualization – in this example the chart.

In this example we have a chart which shows the Margin by Sales Manager and we select the first Sales Manager from the chart.

After we filtered the chart, the chosen values are shown in the chart title and the user can then simply click on the filter…

… and then open the list of available values and set the new filter values.

In addition to the option to define a Story Filter, or a Page Filter, or a Filter for one specific visualization (Widget / Component filter), you can also use Linked Analysis to generate filter values with a simple navigation done by your users, such as using a selection in the Table to filter a chart.

 

So we have clarified what kinds of filters and variables are available in SAP Analytics Cloud. In the next steps, we will take a look at the different filter options and how they are applied to your data.

In addition, we will look at combinations of filter types and review scenarios where the story designer uses several different filter types in a single story, and see how the different filter options relate to each other.

 

I’m a big fan of Chrome OS; it’s my primary choice for computing for many reasons, including high security, consistency, efficiency and practicality. I have a lot of devices running it – a Google Pixelbook as well as an older Samsung Chromebook, an ASUS Chromebit and a shiny new ASUS Chromebox – the N005U. I even have a version of Chrome OS running on my old iMac 24″, via Neverware’s CloudReady system.

The advent of beta support for Linux on Chrome OS is very interesting and an opportunity for me to try out running Visual Studio Code (VSCode) locally. I wasn’t disappointed.

I’m also very interested in the Application Programming Model for SAP Cloud Platform, and its agnostic approach to development environments and deployment targets & runtimes. I installed the cds tool and the extension for VSCode to get a feel for local development with the model on Chrome OS. This post is a brief account of the steps I took, in case you want to do that also.

 

Turning on Linux support

Initially available only in the Chrome OS beta channel, Linux support, at least on my Pixelbook and Chromebox, is now also available on the stable channel. That said, I use the beta channel on both of these devices, in case you’re wondering.

Turning it on is simply a matter of a single click in the settings, whereupon a Linux container will be downloaded and started up:

 

A short while later, a lovely calming terminal appears, the sign for me of a real operating system. This signals the successful completion of the process:

 

Installing VS Code and CDS Language Support

VS Code is available for different platforms from the “Download Visual Studio Code” page. As the image is Debian GNU/Linux 9 (you can see this in the /etc/issue file), I chose the 64 bit .deb file. At the time of writing, this is code_1.28.1-1539281690_amd64.deb reflecting VS Code version 1.28.

While I was in download mode, I went to the SAP Development Tools for Cloud download page and downloaded the official CDS Language Support feature for VS Code:

Note that as the Linux support is via a container, you have to transfer downloaded files to it. The File Manager makes this easy. I just dragged the two downloaded files into the “Linux files” folder that represents the home directory of “qmacro” (my Google ID) in the terminal above:

 

You can install Linux packages like the .deb file very easily, by using the file’s context menu item “Install with Linux (Beta)”. I did this for the VS Code package:

and in a short time received a notification that the install had completed:

I then had a new icon in the tray:

I started VS Code up and used the instructions on the SAP Development Tools for Cloud download page (see above screenshot – basically following “step 3”) to install the CDS Language Support extension directly from the VSIX file.

So far so good!

 

Install other extensions

I installed a couple of other VSCode extensions, but these aren’t essential. I’m a big vim user, so I use the Vim extension for VSCode, and I also installed the SQLite extension for comfortable in-IDE browsing of SQLite databases.

 

Installing Node.js

Next I needed to install Node.js. There are many ways to do this, but I find the Node Version Manager (nvm) to be very useful, and it has a nice side effect of preventing you from getting into a tangle with root privilege requirements – everything you do in Node.js installations via nvm should *not* require the use of root (sudo) so you can’t shoot yourself in the foot.

In the Linux terminal, I used the curl-based installation approach described on the nvm GitHub repository homepage:

curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh | bash

and after a few seconds I was ready to use nvm to install version 8 of Node.js, which I did like this:

nvm install 8

Simple! That gave me node version 8.12.0 and npm version 6.4.1.

 

Installing the cds tool

CDS is at the heart of the Application Programming Model and there’s command line support in the form of a tool called ‘cds’ in the ‘@sap’ npm namespace, available from the SAP NPM repository. Read more about this repository in this post by Sven Kohlhaas “SAP NPM Registry launched: Making the lives of Node.js developers easier“.

To make use of the ‘@sap’ namespaced modules, it’s necessary to tell npm about this registry:

npm config set @sap:registry https://npm.sap.com

Now we can install the cds tool. I did it globally, rather than for a specific Node.js project. Note that because of the nice side effect of nvm mentioned earlier, globally still means within my user space:

npm i -g @sap/cds

That’s the cds tool installed.

If you’re following the SAP TechEd related set of exercises that I mentioned in “Application Programming Model for SAP Cloud Platform – start here“, then you probably also want to explicitly install the cds generator too:

npm i -g @sap/generator-cds

 

All set!

At this point, I’m all set, and if you’ve been following along, you are too!

We’ve got a lovely local development environment for the Application Programming Model, on a proper operating system, with great tools and a competent IDE with rich support for the CDS work we’ll be doing.

If you’re wondering what to do next, you might want to try the exercises in the GitHub repository “SAP/cloud-sample-spaceflight-node” – happy hacking!

On October 10th, Andrea Anderson and I represented SAP at the inaugural Tech Industry Night at the University of Notre Dame. SAP’s David Fowler was also at the event to participate in the networking segment. We were invited to participate because of my membership on the Mendoza College of Business Corporate Advisory Board and were joined by technology professionals from Google, Nielsen, Sony and IBM. It was a lively discussion attended by about fifty graduate students and we were fortunate to speak with many of them individually during the networking portion of the event. I believe we helped to demystify the tech industry and highlighted the great opportunities and rewards of careers in our field.

The event was targeted at graduate business students to help them consider and pursue career opportunities in the tech sector. The high-level objectives for the evening included:

  •  Defining the Tech Sector
  •  Providing a clearer understanding of Functions & Roles in Tech
  •  Notre Dame metrics related to Tech Placement
  •  Preparing & Researching Tech Careers
  •  Q&A with Panelists

Some of the top questions for the panelists included:

  • How “techy” were you?
  • Personal experiences breaking into Tech for the non-tech?
  • What are your day to day tasks?
  • Top Technology Trends & how do you stay up to date?
  • How do Enterprise Sales differ in the Tech Industry & how do we prep if we’ve never sold into enterprise teams?
  • Tech Opportunities for Growth?
  • Tech Industry Challenges?
  • How would you design tech into the MBA program?

This blog covers some of the latest new features and enhancements in SAP Analytics Cloud and SAP Digital Boardroom release 2018.18. Please note that the Analytics Cloud Help documentation is updated at the same time as the upgrade to this release, so the links here may not yet reflect what is described below until after the upgrade is complete.
Upgrade your SAP Analytics Cloud agent to the latest agent version 1.0.121 to take advantage of all the data acquisition types!
If you haven’t upgraded yet, here are the data acquisition minimum requirements:

  • Version 1.0.109 for SAP Business Warehouse (BW)
  • Version 1.0.99 for SAP ERP / SQL databases
  • Version 1.0.91 for SAP Universe (UNX)
  • Version 1.0.75 for all other data source types

For more information, see System Requirements and Technical Prerequisites.

Highlights of this release:

Learn with our latest video tutorials

Planning

Reassign in-progress input tasks

Don’t sweat it if your input tasks need to be reassigned! Even if your input task is in progress you’ll still be able to reassign it to someone else.

Apply pre-defined calculations on single or multiple measures

We’ve designed this feature to simplify your interactions with tables. For tables, you can now select a single measure header or multiple measure headers in the table and then apply a predefined calculation such as Average, Running Total, and so on.

Set read-only status

Now you don’t need to worry about your tables being modified by other users! When creating formatting rules, you can now set a read-only status for table cells. With the new drop-down box, if you set a parent cell to read-only, the status will carry over to its children too.

Forward input task to different organization level hierarchies

Maybe someone sent you an input task that’s a better fit for someone else. It’s okay! You can now forward the input task to someone else, even if they’re at a different level in your organization’s hierarchy.

Calendar Gantt improvements

In the Calendar Gantt year view, you can now drag and drop objects to different dates. Also, you can now see the tasks that you own (assigned as Owner) in both calendar and Gantt view.

Calendar enhancements

By setting the Activate option in the calendar, you can now send tasks or processes out to other users. Also, users can choose whether assignees should be able to see the other assignees for a task.

Data Integration

Improvements to SQL data acquisitions

Increase your productivity! Now that SQL data is acquired asynchronously, you can keep working while the data is being processed. SQL data sources can now be queried from within stories, and they support incremental load and sharing of connections. These improvements bring them on par with OData, Google BigQuery, Google Drive, SAP BW, SAP Universe, SAP SuccessFactors, SAP Cloud for Customer and SAP S/4HANA.

View data file uploads via the Data Sources Tab

For uploads made after 2018.18, you can now view data file uploads for models on the Data Sources tab.

Support for live connections to BPC embedded configurations

You can now connect directly to a BPC embedded model, build a story on the BPC live model, lock cells and submit data back to BPC. Functions in BPC embedded models such as work status, disaggregation, formulas, currency translation, etc. can be used as well.

With the option of leveraging tables and charts as a client, you can enjoy a highly flexible, LOB-tailored user experience. The powerful planning engine in BPC embedded is used to realize advanced and complex enterprise-wide planning.

Please note that advanced planning, predictive, Data Actions, and VDP are not yet supported.

Mobile

SDK enablement support

This enablement now supports customers who want to host the SAP Analytics Cloud app in their internal app stores, in addition to managing the life cycle of the app. It will be available for download in SMP and requires Xcode 9.x and iOS 10 or iOS 11.

 

Administration & Infrastructure

Additional infrastructure improvements

Say goodbye to the days when it was necessary to first revert to default authentication before switching to a different custom IdP. With this update, system owners who have already configured custom SAML authentication can now switch directly from one custom IdP to another.
Administrators can now see in System > Monitor if they are licensed for Digital Boardroom.

On systems provisioned on Cloud Foundry/AWS, system owners can now enable custom SAML authentication using custom SAML NameID mappings rather than having to use email as the NameID. In addition, system owners can register OAuth clients for API Access through System > Administration > App Integration.

Smart Assist

Search to Insight enhancements

Get ready, everyone, because we have some great Smart Assist enhancements coming your way! When using the Search to Insight feature, you can now search HANA live models included in your story. However, the model must first be indexed through the modeler to grant access to every user in the tenant.

Please note that this is for HANA live models only and requires SAP HANA 1.0 SPS12 rev 122.14 or higher.

Scenario 1: User creates a model from a connection for the first time and hasn’t saved the model yet. Search to Insights is under the “General Model Settings” and the Create Index button is disabled.

Scenario 2a: Model is saved and has never been indexed before, therefore it doesn’t show the last indexed timestamp. Create Index button is enabled.

Scenario 2b: User starts indexing. At this stage, a confirmation dialog appears with an estimate of how long indexing will take for this model. Don’t close or refresh the browser while it is being processed!


When the user opens the indexed model’s preferences dialog, a timestamp is shown for the last updated index of this model, and “Delete Index” is visible as an option.

Data Visualization

Configure initial explorer view

Story designers can now open explorer from a chart or table and configure it to control what viewers see when they launch explorer. Multiple Explore tabs can be configured, and one can be set as the default. However, explorer in Digital Boardroom currently shows only the default view. Story designers can use explorer to show more details in a table, for example.

Bookmark personal explorer views

Story viewers can create multiple views in explorer and persist them in a story bookmark. These bookmarks remain private and contain filters, input controls, prompts and explorer views.

Intelligent Enterprise

Choose variable value from a Customer exit or SAP BW backend

When using an SAP BW BEx query, you can now choose whether a variable value comes from a customer exit, which is processed dynamically at runtime from the backend (e.g. the current day, changing every day), or from a saved value that is manually entered by the user (and saved in the story).

For example, a date value like “10.10.2018” only changes when a user sets another value.

Additional resources:

Previous feature summary blogs:

**Legal disclaimer
SAP has no obligation to pursue any course of business outlined in this blog or any related presentation, or to develop or release any functionality mentioned therein. This blog, or any related presentation and SAP’s possible future developments, products and or platforms directions and functionality are all subject to change and may be changed by SAP at any time for any reason without notice. The information in this blog is not a commitment, promise or legal obligation to deliver any material, code, or functionality. This blog is provided without a warranty of any kind, either express or implied, including but not limited to, the implied warranties of merchantability, fitness for a particular purpose, or non-infringement. This blog is for informational purposes and may not be incorporated into a contract. SAP assumes no responsibility for errors or omissions in this document, except if such damages were caused by SAP’s willful misconduct or gross negligence.

All forward-looking statements are subject to various risks and uncertainties that could cause actual results to differ materially from expectations. Readers are cautioned not to place undue reliance on these forward-looking statements, and should not be relied upon in making purchasing decisions.

Higher Education by its very name and implied nature brings images of higher thinking, advanced thought leadership and forward-looking concepts.  It is perhaps odd then, that often the systems and solutions that support higher education are disjointed, outdated and counter intuitive.  Siloed systems with legacy custom solutions are more the norm than the exception.  Nowhere may this be more pronounced than within the student application process for admissions.

If you have gone through this process recently you have seen how disjointed the application process can be: Test scores, grades, referrals, recommendations, advanced placement criteria, interviews…the process is both a maze and a marathon.  Further, in many cases there is no central coordination point within institutions, and most schools have very different processes and procedures.  Factoring in that an average high school senior may apply to 5 or 6 different university choices, the myriad processes and tracking of those processes can be (at the least) daunting.

While the sheer number of information streams is at the center of the process challenge, the disparate university systems are more than a contributing factor.  Grades and transcripts must be pulled from one area and sent to another, standardized test scores are delivered from another, while interviews are coordinated from yet a third.  Campus visits and placement counselor scheduling are hoped to be run by the same central office, but often turn out to be yet another coordination point.

The development of the Common Application several years ago was thought to improve the streamlining and consistency of the process – and it does…for those schools that choose to participate; many do, but many more do not.  This still leaves countless crisscrossing workstreams and, more importantly, alternate deadlines for submissions; and do not miss any deadlines, because flexibility from colleges and universities is minimal to zero.

As economics continues to challenge the viability of many institutions of higher learning, and the marketing and recruitment of students continues to escalate as a university priority, perhaps more thought should go into the application process and the systems that support that challenging process.  In its absence, the application process may actually be a deterrent for rising high school seniors.

In general, university and college systems are inefficient, outdated and decentralized.  Economics and the ongoing budget challenges of colleges and universities may be among the contributing factors, but often risk aversion and the threat of upheaval to long-accepted operating practices are at the heart of the systems’ stagnancy.  Tenured university professors want to focus on research and discovery, not be mired in administration and process.  Further, professors do not like to be managed by a central office – department-by-department operations are a means and an end to uninterrupted research (on an island).

As the economics for universities continue to evolve, the very real need to drive operational efficiency is of increasing importance.  Centralized systems and operations can be a conduit for that transformation, but change is not easy.  Iterative system updates and replacement may seem the more careful course to take on the surface, but resistance and obstacles may prolong real change.  This does not mean that careful planning is not of paramount importance.  Detailed plans should be undertaken early to comprehensively assess all processes and workflows within the overall system.  The actual act of cross-over, however, should be quick and acute.  Upheaval and unpleasantness are likely, as people are often resistant to change (especially tenured university staff) – that resistance may delay or derail intended operational changes.

Given the current balancing act many institutions are undertaking with revenue and expense models, and operating budgets that must evolve with the changing dynamic of the institution, nothing breeds reasonableness like desperation; and many institutions are facing desperate times.  Economics continues to challenge the viability of many institutions of higher learning, and the marketing and recruitment of students continues to escalate as a university priority, ostensibly to drive enrollment numbers (translating to tuition and fees).  But maybe less focus should be placed on that one side of the balance sheet.  Ultimately, the efficiency of operations may be the necessary adaptation that ensures a university or college’s ongoing livelihood.  Some forward-thinking institutions are sharing their perspectives with their peers to drive that necessary innovation (#SAPInnovation).

Change can be painful, but sometimes change is inevitable and must be accepted; perhaps carpenters and mothers know best.  The carpenter’s proverb states: measure twice, cut once.  For university upgrades, a strong and thorough systems analysis that includes comprehensive reviews of staffing and organizational operating environments is critical.  After that analysis, and facing inevitable change, your mom was right; like a Band-Aid – pull it off quickly and completely.

There are several data center trends to pay attention to – edge computing, all-flash solutions, hybrid cloud, data center efficiency measures such as air flow and liquid cooling – but one that stands out is hyperconverged technology. Hyperconverged infrastructure (HCI) can significantly simplify data center design, allowing new agility and scalability, because it more closely couples previously separate components such as compute, storage and network by leveraging software-defined solutions.

SAP is committed to helping customers get the most out of their SAP HANA deployment. Driving openness and cost effectiveness is part of our commitment to delivering customer value. SAP knows from customer feedback that the majority of organizations using SAP HANA want to continue using proven IT processes and harnessing their IT infrastructure investment for SAP HANA. And, increasingly, customers want to benefit from highly flexible and cost-cutting approaches – including commodity hardware and cloud deployment models.

That’s why SAP continues to look into new ways to bring down TCO by leveraging the latest technologies. With certification now available on hyperconverged infrastructures, SAP and its ecosystem of technology partners deliver the next big step in driving down cost, increasing simplicity, and paving the way to the cloud.

HCI is an IT framework that combines compute, storage, network and other components into a single system which reduces data center complexity and increases scalability. It includes a hypervisor for virtual compute nodes that typically run on commodity servers. In the future, HCI solutions may also leverage public infrastructure-as-a-service solutions and will be a foundation for hybrid solutions managing on-premise and IaaS cloud deployments.

To pass the SAP certification criteria, all components of an HCI solution need to pass a dedicated validation. In the end, the combined components need to pass as an end-to-end solution that ensures isolation, so that other workloads do not affect SAP HANA operations.

Several HCI partner solutions are certified (https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/hci.html).

The solutions are designed to serve mission-critical workloads and we worked with our technology partners to put a global support process in place for HCI for SAP HANA.

In a nutshell:

Hyperconverged infrastructure simplifies management of your private cloud deployment and can build the bridge to the public cloud. HCI can reduce total cost of ownership and help you support your business faster and with higher flexibility.

The future will tell if HCI keeps its promise to simplify management and increase infrastructure utilization. If the performance results and memory sizes for HCI meet their demands, SAP HANA customers can take that first step confidently and explore the HCI value potential.