Saturday, March 15, 2008

BUFFERING IN SAP ABAP

Buffer Components:
Definition
An SAP buffer consists of the following parts:

Mode table

The mode table resides in shared memory and tells you which pool contains which shared memory areas. The mode table is part of the common information on the shared memory areas that are accessed by the work processes.

For example, SAP key 1 with mode 0 instructs the OS kernel to take this buffer out of the pool and to allocate a unique shared memory segment for it. SAP key 10 with a positive mode value (the pool size) defines shared memory pool 10 and its size. SAP key 11 with mode -10 means that the buffer is located in pool 10.
SAP Global Management Table

A shared memory area that is allocated by the dispatcher during system startup.

When semaphore protection is on, the SAP Global Management Table is addressed exclusively by SAP Shared Memory Management. This is a central agent that is found in each work process and that sets up a shared memory area for the local application server or instance. The SAP Shared Memory Management issues a call to the operating system (OS) when it creates a shared memory area.


As a result, the SAP key is assigned to an OS key. The OS returns a unique identifier (handle) for the shared memory area, with which SAP Shared Memory Management addresses the area it has created. The SAP Global Management Table, and with it the handle, can be accessed by all work processes in the SAP System.


Address Table

Every work process contains this table. It assigns virtual addresses to the physical addresses of the shared memory areas.


Shared Memory Objects

These include the buffers, for example.

Header

Contains information on the shared memory area (also called a memory segment). If a write error occurs outside the segment area, the consistency of the header is destroyed. The control function of SAP Shared Memory Management checks the consistency of the headers.

ID

Identifies the memory area. The ID is assigned when an SAP Shared Memory Management user requests the memory area.

Storage Class

The memory class. Examples of memory classes: permanent (local), shared, roll, paging and short.

Subdivision

A mark for the requested area that can be referred to later when you release the memory area.



Repository Buffer (Nametab Buffer)

Definition


The name table (nametab) contains the table and field definitions that are activated in the SAP System. An entry is made in the Repository buffer when a mass activator or a user (using the ABAP Dictionary, Transaction SE11) requests to activate a table. The corresponding name table is then generated from the information that is managed in the Repository.


The Repository buffer is mainly known as the nametab buffer (NTAB), but it is also known as the ABAP Dictionary buffer.


The description of a table in the Repository is distributed among several tables (for field definition, data element definition and domain definition). This information is summarized in the name table. The name table is saved in the following database tables:


• DDNTT (table definitions)
• DDNTF (field descriptions)


The Repository buffer consists of four buffers in shared memory, one for each of the following:


• Table definitions: TTAB buffer (database table DDNTT)
• Field descriptions: FTAB buffer (database table DDNTF)
• Initial record layouts: IREC buffer (contains the record layout, initialized depending on the field type)
• Short nametab: SNTAB buffer (a short summary of the TTAB and FTAB buffers)
The Short nametab and Initial record layouts are not saved in the database. Instead, they are derived from the contents of tables DDNTT and DDNTF.


When access to a table is requested, the database access agent embedded in each work process first reads the Short nametab buffer for information about the table. If the information is insufficient (for example, the SELECT statement uses a non-primary key) it accesses the Table definitions buffer and then the Field descriptions buffer. By reading the Repository buffers, the database access agent knows whether the table is buffered or not. Using this information, it accesses the table buffers (partial buffer or generic buffer) or the database.


The IREC buffer is read:


• When a REFRESH command is executed in an ABAP program
• At an INSERT command, when a record is created in the buffers before the data is inserted and the fields are initialized with the values found in IREC buffer
You can configure the buffers mentioned above by editing the corresponding parameters in the instance profile.




Table Buffers

Definition

There are two kinds of table buffers:
• Partial table buffers
• Generic table buffers


Use


The two table buffers serve different purposes: the partial table buffer (single record buffer) stores individual records of tables that are set to single-record buffering, while the generic table buffer stores the generic key areas of generically buffered tables as well as the complete contents of fully buffered tables.



Whether a table is partially buffered, generically buffered, or fully buffered depends on its attribute settings. You can change the buffer attributes of a table using Transaction SE13.
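As an illustration, Open SQL reads on a buffered table are normally satisfied from the table buffer, while the BYPASSING BUFFER addition forces a database access. A minimal ABAP sketch; it assumes that T100 is buffered in your system (the message table T100 is typically buffered in a standard installation), and the key values are only illustrative:

* The first SELECT is normally satisfied from the table buffer;
* the second one bypasses the buffer and always reads the database.
DATA ls_t100 TYPE t100.

SELECT SINGLE * FROM t100
  INTO ls_t100
  WHERE sprsl = 'E' AND arbgb = '00' AND msgnr = '001'.

SELECT SINGLE * FROM t100 BYPASSING BUFFER
  INTO ls_t100
  WHERE sprsl = 'E' AND arbgb = '00' AND msgnr = '001'.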

Program Buffer

Definition

The following table displays the program buffer and its functions.
Buffer: Program buffer
Also known as: SAP executable buffer, ABAP buffer, PXA (Program Execution Area)
Function: Stores the compiled executable versions of ABAP programs (loads). The contents of this buffer are stored in the database tables D010L (ABAP loads), D010T (texts) and D010Y (symbol table).

The program buffer has a hash structure and supports LRU (Least Recently Used) displacement.
You can reconfigure the program buffer by adjusting its instance profile parameters.


SAPgui Buffers

Definition

There are two kinds of SAPgui buffers:

• Presentation buffers
• Menu buffers

The presentation buffer stores the generated loads of screens (dynpros), while the menu buffer (also known as the CUA buffer) stores GUI objects such as menus and pushbuttons.

Roll and Paging Buffers, Extended Memory

Definition

The roll and paging buffers are the parts of the roll and paging areas of an instance (application server) that reside in shared memory; the remaining area is located on disk as roll and paging files. The user context is stored in the extended memory and in the roll area (when the user context is "rolled out" of a work process). The paging area stores special data for the ABAP processor, while the extended memory stores a large portion of a program's internal tables.

You configure the roll and paging buffers, as well as the extended memory, using parameters in the instance profile.

SAP Calendar Buffer

Definition

The SAP calendar buffer stores all defined factory and public holiday calendars.
Calendars are stored in the database tables TFACS and THOCS.
The buffer has a directory structure. This means that if the shared memory is configured too small, only the required data is loaded; there is no LRU displacement of the contents of the buffer.

You can change the calendar buffer by editing its parameter in the instance profile.
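For example, calendar function modules such as DATE_CONVERT_TO_FACTORYDATE read the calendar data through this buffer; their exception CALENDAR_BUFFER_NOT_LOADABLE signals that the calendar data could not be loaded into the buffer. A minimal sketch, assuming the standard factory calendar '01' exists in your system:

* Convert today's date into a factory date using factory calendar '01'.
DATA lv_facdate TYPE scal-facdate.

CALL FUNCTION 'DATE_CONVERT_TO_FACTORYDATE'
  EXPORTING
    date                         = sy-datum
    factory_calendar_id          = '01'
  IMPORTING
    factorydate                  = lv_facdate
  EXCEPTIONS
    calendar_buffer_not_loadable = 1
    correct_option_invalid       = 2
    date_after_range             = 3
    date_before_range            = 4
    date_invalid                 = 5
    factory_calendar_not_found   = 6
    OTHERS                       = 7.

IF sy-subrc <> 0.
  MESSAGE 'Factory calendar could not be read' TYPE 'I'.
ENDIF.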

SAP Cursor Cache

Definition

The SAP cursor cache helps to improve system performance by reducing how often SQL statements have to be parsed; its implementation is database-dependent. The SAP cursor cache differs only slightly between Oracle, Informix and SAP DB, but it is completely different for AS/400 and MS SQL Server.
There are two types of cursor caches:
• Statement ID cache
• Statement cache

Changing the SAP cursor cache parameter value in the default profile will affect other areas as well. You are therefore advised not to tune it without the recommendation of a qualified SAP expert.

Statement IDs and the Statement Analyzer
Each source of SQL statements in the SAP System (ABAP, DYNP, the C modules of the database interface) assigns an ID to its Open SQL or Native SQL statements. The statement ID includes:

• Module name (report name)
• Statement number (line number)
• Timestamp (time of ABAP generation)

The statement ID provides an easy way to recognize statements. There may be different statement IDs for one statement (for example, different ABAP programs executing the same SELECT). The Statement Analyzer eliminates such duplicates. When it receives an SQL statement (in control block form), this database interface module checks whether the statement is simple (for example, SELECT * FROM T100 WHERE ... = ... AND ... = ...) or complex (for example, SELECT * FROM T100 WHERE ... < ... AND ... > ...). If the statement is simple, the Statement Analyzer assigns a 'normalized' statement ID.
The analyzer is called by the RSQL or Open SQL interface. If it is able to assign a normalized ID, the original ID (if one exists) is replaced.

SAP LOCK CONCEPT:

If several users are competing to access the same resource or resources, you need to find a way of synchronizing the access in order to protect the consistency of your data.

Example: In a flight booking system, you would need to check whether seats were still free before making a reservation. You also need a guarantee that critical data (the number of free seats in this case) cannot be changed while you are working with the program.

Locks are a way of coordinating competing accesses to a resource. Each user requests a lock before accessing critical data.

It is important to release the lock as soon as possible, so as not to hinder other users unnecessarily.

Whenever you make direct changes to data on the database in a transaction, the database system sets corresponding locks.

The database management system (DBMS) physically locks the table entries that you want to change (INSERT, UPDATE, MODIFY), as well as those that you read from the database with the intention of changing them (SELECT SINGLE FOR UPDATE). Other users who want to access the locked record or records must wait until the physical lock has been released; in such a case, the ABAP program waits until the lock is released again.
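As a sketch of such a physical database lock, using the flight data model table SFLIGHT (the key values 'LH' and '0400' are only illustrative), the FOR UPDATE addition locks the selected row on the database until the next database commit or rollback:

* Read a flight with a database lock and update it; COMMIT WORK
* ends the database transaction and releases the physical lock.
DATA ls_sflight TYPE sflight.

SELECT SINGLE FOR UPDATE * FROM sflight
  INTO ls_sflight
  WHERE carrid = 'LH' AND connid = '0400' AND fldate = sy-datum.

IF sy-subrc = 0.
  ls_sflight-seatsocc = ls_sflight-seatsocc + 1.
  UPDATE sflight FROM ls_sflight.
  COMMIT WORK.
ENDIF.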

At the end of the database transaction, the database releases all of the locks that it has set during the transaction.

In the R/3 System, this means that each database lock is released when a new screen is displayed, since a change of screen triggers an implicit database commit.

To keep a lock set through a series of screens (from the dialog program to the update program), the R/3 System has a global lock table at the application server level, which you can use to set logical locks for table entries.

One application server holds this lock table and runs a special enqueue work process, which administers all requests for logical locks in the R/3 System; all logical lock requests of the R/3 System run through this work process.

You can also use logical locks to "lock" table entries that do not yet exist on the database (inserting new lines). You cannot do this with physical database locks.

For further information, see the ABAP Editor Keyword documentation for the term Locking.

Logical locks are generated when an entry is written in the lock table. You use function modules to do this.

You can only set a lock if the relevant table entry is not already locked.

The SAP transaction receives information on the success of a lock request through a return code set via the EXCEPTION interface of the function module. In other words, control returns immediately to the calling program; the ABAP program does not have to wait.

The SAP transaction can react appropriately by analyzing the return code.

Another user cannot gain access to work with the same table entries that are already locked.

Depending on the bundling technique in use for database updates, the program must delete the lock entries it generated using a lock module, or have them deleted indirectly (see the unit Organizing Database Updates).

If the user terminates the program that generated the lock entries (usually a dialog program), the locks are released automatically (implicitly). The program can be terminated, for example, by entering /n in the command field, with the statements LEAVE PROGRAM or LEAVE TO TRANSACTION, or with messages of type 'A' or 'X'.

When you call an ENQUEUE function module, the dialog program tries to generate a lock entry.

The export parameters identify the table entry (or entries) that you want to lock.
The program that generates the locks (usually a dialog program) analyzes the return code for lock requests and reacts accordingly.

If the lock could not be set, you should normally output an error message.

At the end of the dialog program, you can use the corresponding DEQUEUE function module to delete the entries from the lock table.

DEQUEUE function modules have no exceptions. If you try to release an entry that is not locked, this has no effect.

To release all of the locks that you have set, you can call the function module DEQUEUE_ALL at the end of your dialog program.
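A minimal sketch of this call sequence, assuming the lock object ESFLIGHT from the SAP flight data model exists; its generated lock modules are ENQUEUE_ESFLIGHT and DEQUEUE_ESFLIGHT, and the key values are only illustrative:

* Request a write lock for one flight; the exceptions of the generated
* module deliver the return code that the program must analyze.
CALL FUNCTION 'ENQUEUE_ESFLIGHT'
  EXPORTING
    mode_sflight   = 'E'
    carrid         = 'LH'
    connid         = '0400'
    fldate         = sy-datum
  EXCEPTIONS
    foreign_lock   = 1
    system_failure = 2
    OTHERS         = 3.

IF sy-subrc <> 0.
  MESSAGE 'Flight is already locked by another user' TYPE 'E'.
ENDIF.

* ... dialog and update processing ...

* Release this lock explicitly at the end of the program.
CALL FUNCTION 'DEQUEUE_ESFLIGHT'
  EXPORTING
    mode_sflight = 'E'
    carrid       = 'LH'
    connid       = '0400'
    fldate       = sy-datum.

* DEQUEUE_ALL releases every lock this program has set (here it has
* no further effect, since the only lock was already released).
CALL FUNCTION 'DEQUEUE_ALL'.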

The lock table contains the lock arguments for each table (lock arguments are explained below).

To display the lock table, use transaction SM12.

The entries in the lock table have a standardized form. Locks are always set using the values of the key fields of a table; these values form the lock argument.

You pass the values for the lock argument to the lock modules via their interface (function module IMPORT parameters).

If you do not supply a value for one of these parameters, the system interprets it generically; that is, the lock is set for all table lines that match the values specified in the other parameters. The client parameter is an exception to this rule: if it is not supplied, the default value SY-MANDT (the current client) applies.
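For example, passing only the carrier to the generated enqueue module of the hypothetical lock object ESFLIGHT (see the earlier sketch) produces a generic lock on all SFLIGHT entries of that carrier in the current client:

* CONNID and FLDATE are not supplied, so the lock argument is generic:
* every flight of carrier 'LH' is locked.
CALL FUNCTION 'ENQUEUE_ESFLIGHT'
  EXPORTING
    mode_sflight   = 'E'
    carrid         = 'LH'
  EXCEPTIONS
    foreign_lock   = 1
    system_failure = 2
    OTHERS         = 3.

IF sy-subrc <> 0.
  MESSAGE 'Carrier LH is already locked' TYPE 'E'.
ENDIF.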

Every lock entry is assigned a lock mode.

There are three different lock modes:

Mode 'E' for write locks: This is set if you want to write data to the database (change, create, or delete).


Mode 'S' for read locks: This is set if you want to ensure that the data, which you are reading from the database in your program, is not changed by other users while the program is running. You do not want to change the data itself in your program.

Mode 'X' for write locks: Like mode 'E', mode 'X' is used for writing data to the database. The technical difference between mode 'X' and mode 'E' is that locks of mode 'X' are not accumulated while a program is being executed (see below for details).

If someone tries to lock the same data record again with a second program (different user), the various lock modes take effect as follows:

Write locks ('E' or 'X') mean that any lock attempts from other users are refused, irrespective of the mode in which the lock is attempted.

If a data record is locked in mode 'S' (shared), further locks in mode 'S' may be set by other users.

Lock attempts in other lock modes ('E' or 'X') are refused.

If you try to lock a data record more than once while a program is running (for example, because a function module that you call sets locks itself), the lock system reacts in the following way:

Mode 'E' write locks are not refused. Instead, a cumulative counter is incremented. The same applies to read locks (mode 'S').

If a data record is locked in mode 'E', a subsequent read lock request (mode 'S') from the same program generates a second lock entry, which is marked as a read lock.

If a data record is locked in mode 'S' and no further read locks are set by other users, a lock attempt in mode 'E' is possible. This generates a second entry in the lock table (for mode 'E').

If a data record is locked in mode 'X', all further lock requests are refused.

If you want to ensure that you are reading up-to-date data in your program (with the intention of changing and returning this to the database), you should use the following procedure for lock requests and database accesses in your program:

First, lock the data that you want to edit.

Then read the current data from the database.

In the next step, process (change) the data in your program and write this to the database.

In the final step, release the locks that you set at the beginning.

This procedure ensures that your changes run fully with lock protection and that you only read data that has been changed consistently by other programs (provided that these also use the SAP lock concept and follow the procedure described here).
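A compact sketch of this procedure, again assuming the lock object ESFLIGHT and the flight table SFLIGHT from the flight data model (key values illustrative):

* 1. Lock the entry
CALL FUNCTION 'ENQUEUE_ESFLIGHT'
  EXPORTING
    mode_sflight   = 'E'
    carrid         = 'LH'
    connid         = '0400'
    fldate         = sy-datum
  EXCEPTIONS
    foreign_lock   = 1
    system_failure = 2
    OTHERS         = 3.
IF sy-subrc <> 0.
  MESSAGE 'Flight is locked by another user' TYPE 'E'.
ENDIF.

* 2. Read the current data (the lock guarantees it is not changed
*    by other programs that also use the SAP lock concept)
DATA ls_sflight TYPE sflight.
SELECT SINGLE * FROM sflight INTO ls_sflight
  WHERE carrid = 'LH' AND connid = '0400' AND fldate = sy-datum.

* 3. Change the data and write it back
ls_sflight-seatsocc = ls_sflight-seatsocc + 1.
UPDATE sflight FROM ls_sflight.
COMMIT WORK.

* 4. Release the lock
CALL FUNCTION 'DEQUEUE_ESFLIGHT'
  EXPORTING
    mode_sflight = 'E'
    carrid       = 'LH'
    connid       = '0400'
    fldate       = sy-datum.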

Lock modules are generated for lock objects, not for individual tables.

Lock objects are maintained in the dictionary. Customer lock objects must begin with "EY" or "EZ".

A lock object is a logical object composed of a list of tables that are linked by foreign key relationships. Lock modules are generated for these objects and enable common lock entries to be set for all tables contained in the lock object. This allows combinations of table entries to be locked.

The list of tables for a lock object starts with a primary table; further tables are referred to as secondary tables. Only tables with foreign key relationships to the primary table can be used as secondary tables.

With lock objects, you can assign different names for the parameters that describe the fields of the lock arguments for the lock modules. The names of the table fields (key fields of the tables) are proposed by the system.

You can specify the lock mode (a write lock 'E' or 'X' or a read lock 'S') for each table. These function as default values for the lock modules.

After you have assigned tables and default lock modes, the lock object must be activated (generated).

When you activate a lock object, the system generates an ENQUEUE and a DEQUEUE function module.

These have the names ENQUEUE_<lock object name> and DEQUEUE_<lock object name> respectively.

To recap, if you want to read up-to-date data with the intention of changing it and writing it back to the database, use the following sequence of lock requests and database accesses:

1. Lock the data that you want to edit.
2. Read the current data from the database.
3. Process (change) the data in your program and write it to the database.
4. Release the locks that you set at the beginning.

If you change the order of the four steps to Read -> Lock -> Change -> Unlock, you run the risk that the data read by your program will not be up to date. Your program can read data before another user's program writes changes to the database. This means that a user of your program will make decisions for entries that are not based on up-to-date data from the database. For this reason, you should always follow the recommended procedure.

Requesting a lock from a program is a communication step with the lock administration (enqueue work process), and each communication step takes a certain amount of time. If your program sets locks for several objects, this overhead occurs several times.

By using so-called local lock containers, you can reduce these communication intervals with lock administration. To do so, collect the required lock requests of your program and send them together to lock administration.

Lock requests can be collected (for delayed execution) when the lock modules are called. To do this, set the import parameter _COLLECT to 'X'. The data transferred via the lock module interface is then registered in a list (the lock container) as a lock request that still needs to be executed.

The lock container can then be closed and sent to lock administration using the function module FLUSH_ENQUEUE.

If all lock requests in the container can be executed, the lock container is then deleted.

If one of the locks in a container cannot be set, the function module FLUSH_ENQUEUE triggers the exception FOREIGN_LOCK. In this case, none of the registered lock requests is executed. The registered locks remain in the lock container.

You can delete the contents of an existing lock container with the function module RESET_ENQUEUE.

The function modules mentioned above have the release status 'released internally'.
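A minimal sketch of a lock container, again assuming the lock object ESFLIGHT: the enqueue calls only register their requests because _COLLECT is set, and FLUSH_ENQUEUE then sends them to lock administration in a single step.

* Register two lock requests in the local lock container.
CALL FUNCTION 'ENQUEUE_ESFLIGHT'
  EXPORTING
    carrid   = 'LH'
    connid   = '0400'
    fldate   = sy-datum
    _collect = 'X'.

CALL FUNCTION 'ENQUEUE_ESFLIGHT'
  EXPORTING
    carrid   = 'LH'
    connid   = '0402'
    fldate   = sy-datum
    _collect = 'X'.

* Send the collected requests to lock administration in one step.
CALL FUNCTION 'FLUSH_ENQUEUE'
  EXCEPTIONS
    foreign_lock   = 1
    system_failure = 2
    OTHERS         = 3.

IF sy-subrc <> 0.
  " None of the registered locks was set; discard the container.
  CALL FUNCTION 'RESET_ENQUEUE'.
ENDIF.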
