
HANA Monitoring: Handy SQLs

Monitoring Memory Usage

Used Memory

The total amount of memory in use by SAP HANA is referred to as its Used Memory. This is the most precise indicator of the amount of memory the SAP HANA database uses at any given time.

When used: To check the current used memory in HANA when HANA alerts show usage greater than the licensed memory. Understanding memory usage by component helps in troubleshooting and in performing any necessary memory clean-up actions.

To display the current size of Used Memory, use the following SQL statement:

SELECT
        ROUND(SUM(TOTAL_MEMORY_USED_SIZE/1024/1024/1024), 2) AS "Used Memory GB"
FROM SYS.M_SERVICE_MEMORY;

Display current used memory for Column Store Tables

SELECT
        ROUND(SUM(MEMORY_SIZE_IN_TOTAL)/1024/1024) AS "Column Tables MB Used"
FROM M_CS_TABLES;

Display current memory used breakdown by Schema

SELECT
        SCHEMA_NAME AS "Schema",
        ROUND(SUM(MEMORY_SIZE_IN_TOTAL)/1024/1024) AS "MB Used"
FROM M_CS_TABLES
GROUP BY SCHEMA_NAME
ORDER BY "MB Used" DESC;
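To drill down from the schema level to the individual tables in one schema, the same M_CS_TABLES view can be queried per table (a small sketch along the same lines; 'SCHEMA' is a placeholder):

SELECT
        SCHEMA_NAME AS "Schema",
        TABLE_NAME AS "Table",
        ROUND(MEMORY_SIZE_IN_TOTAL/1024/1024) AS "MB Used"
FROM M_CS_TABLES
WHERE SCHEMA_NAME = 'SCHEMA'
ORDER BY MEMORY_SIZE_IN_TOTAL DESC;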

Display memory usage by components

SELECT
        HOST,
        COMPONENT,
        SUM(USED_MEMORY_SIZE) AS USED_MEM_SIZE
FROM PUBLIC.M_SERVICE_COMPONENT_MEMORY
GROUP BY HOST, COMPONENT
ORDER BY SUM(USED_MEMORY_SIZE) DESC;
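For a coarser per-service view of the same numbers, M_SERVICE_MEMORY can be grouped by service (a sketch; SERVICE_NAME and TOTAL_MEMORY_USED_SIZE are standard columns of that view):

SELECT
        HOST,
        SERVICE_NAME,
        ROUND(TOTAL_MEMORY_USED_SIZE/1024/1024/1024, 2) AS "Used Memory GB"
FROM SYS.M_SERVICE_MEMORY
ORDER BY TOTAL_MEMORY_USED_SIZE DESC;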


Database resident

Resident memory is the physical memory actually in operational use by a process.

SELECT SUM(PHYSICAL_MEMORY_SIZE/1024/1024/1024) AS "Database Resident" FROM M_SERVICE_MEMORY;

Find the total resident memory on each node and the physical memory size

SELECT
        HOST,
        ROUND(USED_PHYSICAL_MEMORY/1024/1024/1024, 2) AS "Resident GB",
        ROUND((USED_PHYSICAL_MEMORY + FREE_PHYSICAL_MEMORY)/1024/1024/1024, 2) AS "Physical Memory GB"
FROM PUBLIC.M_HOST_RESOURCE_UTILIZATION;

Find the total resident memory (process resident plus shared memory)

SELECT
        T1.HOST,
        (T1.USED_PHYSICAL_MEMORY + T2.SHARED_MEMORY_ALLOCATED_SIZE)/1024/1024/1024 AS "Total Resident"
FROM M_HOST_RESOURCE_UTILIZATION AS T1
JOIN (SELECT
        HOST,
        SUM(SHARED_MEMORY_ALLOCATED_SIZE) AS SHARED_MEMORY_ALLOCATED_SIZE
      FROM SYS.M_SERVICE_MEMORY
      GROUP BY HOST) AS T2
ON T2.HOST = T1.HOST;

Maximum peak used memory

The SAP HANA database tracks the highest-ever value of Used Memory reached since the database was started. This is probably the single most significant memory indicator to monitor, as an overall measure of the total amount of memory required to operate the SAP HANA database over a long period of time.

SELECT
        ROUND(SUM("M")/1024/1024/1024, 2) AS "Max Peak Used Memory GB"
FROM (SELECT
        SUM(CODE_SIZE + SHARED_MEMORY_ALLOCATED_SIZE) AS "M"
      FROM SYS.M_SERVICE_MEMORY
      UNION
      SELECT
        SUM(INCLUSIVE_PEAK_ALLOCATION_SIZE) AS "M"
      FROM M_HEAP_MEMORY
      WHERE DEPTH = 0);

Peak used memory

SAP HANA maintains a special Used Memory indicator called Peak Used Memory. It is useful for keeping track of the peak value (the maximum, or "high-water mark") of Used Memory over time. Here is how to read the Peak Used Memory:

SELECT
        ROUND(SUM("M")/1024/1024/1024, 2) AS "Peak Used Memory GB"
FROM (SELECT
        SUM(CODE_SIZE + SHARED_MEMORY_ALLOCATED_SIZE) AS "M"
      FROM SYS.M_SERVICE_MEMORY
      UNION
      SELECT
        SUM(INCLUSIVE_PEAK_ALLOCATION_SIZE) AS "M"
      FROM M_HEAP_MEMORY_RESET
      WHERE DEPTH = 0);


Memory usage on the server (OS level)

free -g | awk '/Mem:/ {print "Physical Memory: " $2 " GB."} /cache:/ {print "Resident: " $3 " GB."}'


Memory Cleanup: Forcing the Garbage Collector from the Server

Log in to the HANA server -> open HDBAdmin.sh and navigate to Services -> Console.

Select the node where garbage collection should be triggered, then execute the command below:

mm gc -f

The garbage collector is triggered and frees up memory. This does not unload any tables.
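If you prefer SQL over the console, a comparable cleanup can be requested with a RECLAIM statement (a hedged sketch; this triggers MVCC garbage collection to free version space, and its exact behavior depends on the revision):

-- Trigger MVCC garbage collection and release version space
ALTER SYSTEM RECLAIM VERSION SPACE;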

Resetting Monitoring Views

When used: When testing a report, or when you need to monitor the peak memory usage of a SQL statement, I/O, memory object throughput, or statistics about garbage collection jobs. The statements below reset these statistics.

Memory allocator statistics

The M_HEAP_MEMORY view contains information about the memory consumption of various components in the system.

ALTER SYSTEM RESET MONITORING VIEW SYS.M_HEAP_MEMORY_RESET;

The M_CONTEXT_MEMORY view contains information about memory consumption grouped by connections and/or users.

ALTER SYSTEM RESET MONITORING VIEW SYS.M_CONTEXT_MEMORY_RESET;

File access statistics

The M_VOLUME_IO_STATISTICS_RESET view shows information about basic I/O operations on I/O subsystems (that is, paths).

ALTER SYSTEM RESET MONITORING VIEW SYS.M_VOLUME_IO_STATISTICS_RESET;

Memory object statistics

The M_MEMORY_OBJECTS_RESET view provides information about the number and size of resources currently in the resource container and about the throughput of the resource container.

ALTER SYSTEM RESET MONITORING VIEW SYS.M_MEMORY_OBJECTS_RESET;

Garbage collection/history manager statistics

The M_GARBAGE_COLLECTION_STATISTICS_RESET view shows various statistics about garbage collection jobs.

ALTER SYSTEM RESET MONITORING VIEW SYS.M_GARBAGE_COLLECTION_STATISTICS_RESET;


Schema/Tables Monitoring

Find Tables loaded into memory & delta records

When used: To see which tables are loaded into memory at any given time. If a report is running slow, check whether the table is loaded into memory; although tables are loaded lazily, it is a best practice to have the table loaded into memory (see the LOAD/UNLOAD sketch after the column-level query below).

SELECT
        LOADED,
        TABLE_NAME,
        RECORD_COUNT,
        RAW_RECORD_COUNT_IN_DELTA,
        MEMORY_SIZE_IN_TOTAL,
        MEMORY_SIZE_IN_MAIN,
        MEMORY_SIZE_IN_DELTA
FROM M_CS_TABLES
WHERE SCHEMA_NAME = 'SCHEMA'
ORDER BY RAW_RECORD_COUNT_IN_DELTA DESC;

To drill down further and see which columns are loaded or not loaded, use the query below:

SELECT TOP 100
        LOADED,
        HOST,
        TABLE_NAME,
        COLUMN_NAME,
        MEMORY_SIZE_IN_TOTAL
FROM PUBLIC.M_CS_COLUMNS
WHERE SCHEMA_NAME = 'SCHEMA'
AND LOADED <> 'TRUE';
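To explicitly load or unload a table, the standard LOAD/UNLOAD statements can be used (a minimal sketch; schema and table names are placeholders):

-- Load all columns of the table into memory
LOAD "SCHEMA"."TABLE_NAME" ALL;

-- Unload the table from memory; it is lazily reloaded on next access
UNLOAD "SCHEMA"."TABLE_NAME";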

MERGE DELTA

Check whether there is a delta to be merged; RAW_RECORD_COUNT_IN_DELTA provides the delta record count.

SELECT
        LOADED,
        TABLE_NAME,
        RECORD_COUNT,
        RAW_RECORD_COUNT_IN_DELTA,
        MEMORY_SIZE_IN_TOTAL,
        MEMORY_SIZE_IN_MAIN,
        MEMORY_SIZE_IN_DELTA
FROM M_CS_TABLES
WHERE SCHEMA_NAME = 'SCHEMA'
ORDER BY RAW_RECORD_COUNT_IN_DELTA DESC;

Forcing delta Merge

UPDATE SCHEMA.COLUMN_STATISTICS MERGE DELTA INDEX;
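As pointed out in the comments below, the documented form of the statement is MERGE DELTA (the UPDATE ... MERGE DELTA INDEX form above is older syntax); a sketch with placeholder names:

-- Documented delta merge statement
MERGE DELTA OF "SCHEMA"."TABLE_NAME";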

Smart merge

UPDATE <table_name> MERGE DELTA INDEX WITH PARAMETERS ('SMART_MERGE' = 'ON');

Find Auto Merge On

SELECT TABLE_NAME, AUTO_MERGE_ON FROM SYS.TABLES;
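Automatic merging can also be switched on or off per table (a minimal sketch with placeholder names):

-- Disable automatic delta merges for one table
ALTER TABLE "SCHEMA"."TABLE_NAME" DISABLE AUTOMERGE;

-- Re-enable automatic delta merges
ALTER TABLE "SCHEMA"."TABLE_NAME" ENABLE AUTOMERGE;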


Find Compression

When used: To see the uncompressed size and the compression ratio in HANA for the loaded tables.

SELECT TOP 100
        "SCHEMA_NAME",
        SUM("DISTINCT_COUNT") AS RECORD_COUNT,
        SUM("MEMORY_SIZE_IN_TOTAL") AS COMPRESSED_SIZE,
        SUM("UNCOMPRESSED_SIZE") AS UNCOMPRESSED_SIZE,
        (SUM("UNCOMPRESSED_SIZE")/SUM("MEMORY_SIZE_IN_TOTAL")) AS COMPRESSION_RATIO,
        100*(SUM("UNCOMPRESSED_SIZE")/SUM("MEMORY_SIZE_IN_TOTAL")) AS COMPRESSION_PERCENTAGE
FROM "SYS"."M_CS_ALL_COLUMNS"
GROUP BY "SCHEMA_NAME"
HAVING SUM("UNCOMPRESSED_SIZE") > 0
ORDER BY UNCOMPRESSED_SIZE DESC;

To go down to a more detailed level and identify which type of compression is applied to each column, and its ratio, use the query below:

SELECT
        COLUMN_NAME,
        LOADED,
        COMPRESSION_TYPE,
        MEMORY_SIZE_IN_TOTAL,
        UNCOMPRESSED_SIZE,
        COMPRESSION_RATIO_IN_PERCENTAGE AS COMPRESSION_FACTOR
FROM M_CS_COLUMNS
WHERE SCHEMA_NAME = 'SCHEMA';

Forcing compression on a table

UPDATE SCHEMA.COLUMN_STATISTICS WITH PARAMETERS ('OPTIMIZE_COMPRESSION' = 'TRUE');

Find which node is active

To find which node your session is connected to (useful in failover scenarios when maintaining distributed, multi-node HANA servers):

SELECT
        HOST,
        PORT,
        CONNECTION_ID
FROM M_CONNECTIONS
WHERE OWN = 'TRUE';

Expensive Statements

Ensure the expensive statements trace is ON (a configuration sketch follows below).

When used: To troubleshoot a report or SQL failure and understand why it failed; also to monitor the expensive SQL statements executed in HANA and identify opportunities for performance optimization.
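To switch the trace on from SQL, the expensive_statement section of global.ini can be set (a hedged sketch; the threshold is in microseconds, and the parameter names should be verified against your revision):

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
SET ('expensive_statement', 'enable') = 'true',
    ('expensive_statement', 'threshold_duration') = '1000000'
WITH RECONFIGURE;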

Find expensive statements that ended in errors

SELECT
        "HOST",
        "PORT",
        "CONNECTION_ID",
        "TRANSACTION_ID",
        "STATEMENT_ID",
        "DB_USER",
        "APP_USER",
        "START_TIME",
        "DURATION_MICROSEC",
        "OBJECT_NAME",
        "OPERATION",
        "RECORDS",
        "STATEMENT_STRING",
        "PARAMETERS",
        "ERROR_CODE",
        "ERROR_TEXT",
        "LOCK_WAIT_COUNT",
        "LOCK_WAIT_DURATION",
        "ALLOC_MEM_SIZE_ROWSTORE",
        "ALLOC_MEM_SIZE_COLSTORE",
        "MEMORY_SIZE",
        "REUSED_MEMORY_SIZE",
        "CPU_TIME"
FROM "PUBLIC"."M_EXPENSIVE_STATEMENTS"
WHERE ERROR_CODE > 0
ORDER BY START_TIME DESC;

Finding expensive statements executed by a user

SELECT
        "HOST",
        "PORT",
        "CONNECTION_ID",
        "TRANSACTION_ID",
        "STATEMENT_ID",
        "DB_USER",
        "APP_USER",
        "START_TIME",
        "DURATION_MICROSEC",
        "OBJECT_NAME",
        "OPERATION",
        "RECORDS",
        "STATEMENT_STRING",
        "PARAMETERS",
        "ERROR_CODE",
        "ERROR_TEXT",
        "LOCK_WAIT_COUNT",
        "LOCK_WAIT_DURATION",
        "ALLOC_MEM_SIZE_ROWSTORE",
        "ALLOC_MEM_SIZE_COLSTORE",
        "MEMORY_SIZE",
        "REUSED_MEMORY_SIZE",
        "CPU_TIME"
FROM "PUBLIC"."M_EXPENSIVE_STATEMENTS"
WHERE STATEMENT_STRING LIKE '%NAIRV%';

 

CONNECTIONS

Find running connections

SELECT
        "HOST",
        "PORT",
        "CONNECTION_ID",
        "TRANSACTION_ID",
        "START_TIME",
        "IDLE_TIME",
        "CONNECTION_STATUS",
        "CLIENT_HOST",
        "CLIENT_IP",
        "CLIENT_PID",
        "USER_NAME",
        "CONNECTION_TYPE",
        "OWN",
        "IS_HISTORY_SAVED",
        "MEMORY_SIZE_PER_CONNECTION",
        "AUTO_COMMIT",
        "LAST_ACTION",
        "CURRENT_STATEMENT_ID",
        "CURRENT_OPERATOR_NAME",
        "FETCHED_RECORD_COUNT",
        "AFFECTED_RECORD_COUNT",
        "SENT_MESSAGE_SIZE",
        "SENT_MESSAGE_COUNT",
        "RECEIVED_MESSAGE_SIZE",
        "RECEIVED_MESSAGE_COUNT",
        "CREATOR_THREAD_ID",
        "CREATED_BY",
        "IS_ENCRYPTED",
        "END_TIME",
        "PARENT_CONNECTION_ID",
        "CLIENT_DISTRIBUTION_MODE",
        "LOGICAL_CONNECTION_ID",
        "CURRENT_SCHEMA_NAME",
        "CURRENT_THREAD_ID"
FROM "PUBLIC"."M_CONNECTIONS"
WHERE CONNECTION_STATUS = 'RUNNING'
ORDER BY "START_TIME" DESC;

Resetting Connections

Find the connection

SELECT CONNECTION_ID, IDLE_TIME
FROM M_CONNECTIONS
WHERE CONNECTION_STATUS = 'IDLE' AND CONNECTION_TYPE = 'Remote'
ORDER BY IDLE_TIME DESC;

Disconnect or cancel a session. DISCONNECT SESSION closes the session, while CANCEL SESSION cancels the session's currently executing statement:

ALTER SYSTEM DISCONNECT SESSION '203927';

ALTER SYSTEM CANCEL SESSION '237048';

Find owners of objects

SELECT * FROM "PUBLIC"."OWNERSHIP" WHERE SCHEMA = 'SCHEMA';


Find Granted Privileges for Users

SELECT * FROM PUBLIC.GRANTED_PRIVILEGES
WHERE GRANTEE_TYPE = 'USER' AND GRANTOR = 'NAIRV';

PASSWORD Policy

Disable the password lifetime policy for a user. This is used when you do not want the password expiration policy applied to a user; it sets the password lifetime to unlimited.

ALTER USER <user_name> DISABLE PASSWORD LIFETIME;
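To re-apply the lifetime check later, the mirror statement can be used (same placeholder):

ALTER USER <user_name> ENABLE PASSWORD LIFETIME;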

Audit Policy

Configure

Enable global auditing

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
SET ('auditing configuration', 'global_auditing_state') = 'true'
WITH RECONFIGURE;

Set the auditing file type

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
SET ('auditing configuration', 'default_audit_trail_type') = 'CSVTEXTFILE'
WITH RECONFIGURE;

Set the audit target path

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
SET ('auditing configuration', 'default_audit_trail_path') = 'path'
WITH RECONFIGURE;

Find the implemented audit policies

SELECT * FROM PUBLIC.AUDIT_POLICIES;

To enable or disable global auditing, set global_auditing_state to 'true' or 'false':

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
SET ('auditing configuration', 'global_auditing_state') = 'true'
WITH RECONFIGURE;

Add audit policy

CREATE AUDIT POLICY Audit_EDW_DM_DROPTABLE_H00 AUDITING SUCCESSFUL DROP TABLE LEVEL CRITICAL;

Policy enable/disable

ALTER AUDIT POLICY Audit_EDW_DM_DROPTABLE_H00 ENABLE;


CHECK AFL PAL FUNCTIONS ARE INSTALLED

SELECT * FROM SYS.AFL_FUNCTIONS WHERE PACKAGE_NAME = 'PAL';


Comments
      Krishna Tangudu

      Thanks for Sharing Vinod

      Regards,

      Krishna Tangudu

      Raj Kumar S

      Finally you started sharing. Good to see this Vinod.

      Former Member (Blog Post Author)

      Thanks Raj, I have to get to this space and manage my petty notes here 🙂 . Better late than never...

      - Regards,

      Vinod Nair

      Former Member

      It is very nice.

      I would suggest a little correction for the section

      Find which node is active

      SELECT HOST,PORT,CONNECTION_ID FROM M_CONNECTIONS WHERE OWN = 'TRUE';

      It will tell you which node your session is connected to, and nothing else. Useful when you need to know your session's connection_id.

      Former Member (Blog Post Author)

      Thanks Lin! updated.

      Former Member (Blog Post Author)

      The connection will be to the active node/master. This query helps identify the master node at a given time. It is useful in failover scenarios when you are maintaining distributed/multi-node HANA servers.

      Former Member

      Nope. The connection can be routed to any active node, not only to the master. And eventually it can be rerouted to another, less used node, or even routed directly to the node that manages the table.

      Former Member

      Where did you find UPDATE SCHEMA.COLUMN_STATISTICS MERGE DELTA INDEX; ?


      Per the documentation, the delta merge statement is

      MERGE DELTA OF TABLE_NAME

      optionally with

      WITH PARAMETERS ('SMART_MERGE' = 'OFF');

      The table to check how long a merge took:

      SELECT * FROM SYS.M_DELTA_MERGE_STATISTICS;

      LAST_ERROR might be 2482, which means there was nothing to merge, or SMART merge just decided it was not a good time to do it 🙂 based on smart_merge_decision_func.

      Also, the MOTIVATION column can be AUTO, SMART, HARD, MEMORY, CRITICAL, or FORCED.

      It is even possible to watch merge progress in the M_JOB_PROGRESS table.

      Stefan Seemann

      Handy collection indeed.

      Instead of

      where schema_name = 'SCHEMA'

      you can use

      where schema_name = CURRENT_USER

      This automatically uses the schema name of the connected user.

      Former Member

      Hi All,

        Execute immediate('COMMIT');

        MERGE DELTA OF <TABLE NAME> WITH PARAMETERS ('MEMORY_MERGE' = 'ON');

      Finally, after all updates are done, run the following command:

      MERGE DELTA OF <TABLE NAME>

      This sequence gives even better performance on large data loads, with the delta merge data in memory not growing huge.

      Hopefully this helps everyone.

      PK

      Lars Breddemann

      Sorry, but can't agree.

      Generally speaking, the delta merge process should be left to its own devices.

      Things like mass data loading optimization, delta merge fine tuning and transaction handling in stored procedures - these all belong to a rather specific and currently popular use case in SAP HANA: rebuild basic data warehousing functionality in SAP HANA SQLScript.

      This surely does not apply for every HANA user.

      - Lars

      Former Member

      Lars --  Yes I agree it won't apply to every user.

      When we left it to HANA, it sat for more than 6 hours before the merge completed.

      When doing an initial load with a huge volume, waiting for the auto merge was not an option, as we saw the delta merge size growing exponentially.

      My steps are only for people with advanced knowledge of the internals of the merge options. They are based on a case study we did internally, where we found these steps to be a last resort.

      Hopefully this helps.

      Regards

      PK

      Justin Molenaur

      I think there is auto merge configuration that could solve your merging issues as described. I ran some detailed tests to really determine if it was valuable to manually manage the merge process (Lars can attest!), and in the end it was better to let auto merge handle it.

      If you have initial load tables not merging for 6 hours, there had to be another underlying issue.

      Do you have any specific performance results to back up your conclusion? Just curious to share findings as we had seen the opposite.

      Regards,

      Justin

      Former Member

      Hi Justin,

      We had two issues which prompted us to use this approach.

      1. The delta merge was growing huge during initial loads, and we were running into 90+ percent memory utilization.

      2. We were loading 5 years of Financial data in one go and wanted to make sure we commit after every year.

      We felt our approach would solve both problems, rather than waiting for the auto merge to happen.

      Regards

      PK

      Former Member (Blog Post Author)

      Hi Justin, I too ran into this issue last year with revision 62 at a customer site. We had nightly batch loads to HANA with a 3 to 5 MM record delta. The batch started slowing down and ran into memory alerts, and we identified the delta merge as the root cause. We had to add a delta merge script to the post-load script of DS for each table as a fix. At that time the delta merge configurations were all default. I have not had a chance to look at this in the latest revision.

      -Thanks

      Vinod Nair

      Former Member

      Thanks Vinod. That is exactly what we are doing in our project too.

      Regards

      PK

      Justin Molenaur

      Interesting points, just want to ask a few follow ups here. Not challenging your approach, just trying to understand for the benefit of the group. Looks like you guys are taking slightly different approaches to similar problems.

      @ PK and Vinod

      - Were the tables you were loading non-partitioned or had you partitioned them prior to the loading activities? I have seen extreme memory usage issues where huge tables that were not partitioned had to merge the entire dataset every time. After partitioning, much smaller merges were happening and exponentially less time and memory was required for the merge process.

      @ PK

      - The 'MEMORY_MERGE' parameter eliminates the need to write the changes to persistence until the merge is triggered, as you show. So it seems you are just eliminating the I/O during the actual merge process. Could that be problematic in a recovery scenario?

      - 'Loading 5 years at one go, making sure to commit after each year' - you mean merge after each year is loaded, right? The data is committed technically as soon as the load completes. What kind of volumes are these?

      @ Vinod

      - In your case, are you disabling automerge on the tables in question and then triggering the merge after the loads complete?

      Regards,

      Justin

      Former Member

      Hi Justin,

      In my case the tables are not partitioned (not my choice; the client decided not to do it at this point).

      Yes, you are absolutely right: I am disabling the writes to I/O by issuing MEMORY_MERGE ON. All these steps are done only for the initial load, and we as a team decided to take the risk. And yes, we wanted to make sure we load data in chunks of one year and commit immediately after each year. The reason for issuing EXEC('COMMIT') was to free up the memory in delta; if not issued, the delta merge still grows, but not at an exponential rate.

      Once the initial load is done we are moving back to auto merge option.

      Regards

      PK

      Justin Molenaur

      Thanks for the info, this helps.

      Keeping what you said in mind, do you expect this table to grow over time after the initial load or is the initial load the final dataset?

      If you didn't partition the table, the auto merge will still be very expensive going forward as new data arrives in the table. You are going to hit the exact same problem: as the table grows, the merge time/resources will increase to a point where it is unmanageable.

      I would advise considering your partitioning options to mitigate that risk. That would also help with any future initial loads too, since no manual process would be needed.

      Regards,

      Justin

      Lucas Oliveira

      Hi All,

      There's a neat set of SQL statements in collective Note 1969700 - SQL statement collection for SAP HANA. The Note is surely complementary to the information described in this post.

      Cheers,

      Lucas de Oliveira

      Former Member

      Hi Nair,

      regarding Garbage Collection,

      your advice was to run the command below:

      mm gc -f

      Isn't this automated by setting the parameter below in the indexserver.ini file?

      mvcc_aged_checker_timeout

      Appreciate for your thoughts.

      Thanks

      Srikanth M

      Former Member (Blog Post Author)

      Hi Srikanth, I agree. The command I had was from very early revisions, SP4 or SP5 (not sure now), where I had run into memory and thread management issues and had to manually call GC for some troubleshooting. You don't have to do any of this on recent releases; HANA does a good job and manages this automatically.

      Thanks,

      Vinod

      Former Member

      Many queries in one location. Thanks for all these. Add some more about the XS engine, checking the ports used by tenant DBs, etc.

      Anindya Bose

      Very good article. Thanks for sharing.

      Former Member

      This is a wonderful collection of useful commands!

      Is there a way to find out which tables are referenced (used) the most by views? For example, for the MARA table, we want to know whether it is used by 20 views or 200 views. Any help with this would be great. Thank you.

      Former Member

      Very useful commands. Appreciate your effort putting them together.

      Antonio Dantas

      Dear Experts

      Please,

      We have SAP NetWeaver with a HANA database and no access via HANA Studio. My question: is it possible to run these SQL commands in the SQL editor of transaction DBACOCKPIT?

      Many thanks for the help.

      Michael Healy

      Please check SAP Note 1969700.

      You can run these SQLs from DBACOCKPIT.

      Antonio Dantas

      Dear Michael,

      I looked at this SAP Note, but the scripts contain many comments, and I want only the SQL commands.

      Any idea how to get just the SQL commands?

      Thanks for the help