
Siebel Admin

How Should Client Side Logging Be Set?
Using the SIEBEL_LOG_EVENTS environment variable, you can control what kind of information is collected. Set this environment variable temporarily while troubleshooting an issue. When running the Siebel client in a production environment, do not set this environment variable, because it affects performance and consumes disk space. To set this environment variable, follow the steps below:
 
  1. Open a Command Window and change the current directory to the <ClientRootDir>\bin directory.
 
  2. Type the following command.
 
set SIEBEL_LOG_EVENTS=<event level>
 
NOTE:  <event level> can be set to a value between 0 and 5. The higher the level, the more detailed the information collected. With the default value of 1, minimal information is collected in the log file. Set the event level to 3 or higher to troubleshoot an issue, or when instructed by Siebel Technical Support. Note that the higher the event level, the more disk space is consumed; when setting a higher event level, monitor that sufficient disk space remains available.
 
  3. Launch the Siebel client application from within the same Command Window, specifying the required parameters. The following example launches the Siebel client as user USERNAME with password PASSWORD and logs in to the local database.
 
siebel /u USERNAME /p PASSWORD /d local
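The three steps above can be combined in a single session. A minimal sketch in POSIX shell (on Windows, use `set` as shown above; the launch line is commented out because it needs an installed client):

```shell
# One-off troubleshooting session (sketch). On Windows use `set` instead
# of `export`; the siebel launch line needs a real client installation.
export SIEBEL_LOG_EVENTS=3      # verbose enough for most troubleshooting
echo "logging at level $SIEBEL_LOG_EVENTS"
# siebel /u USERNAME /p PASSWORD /d local
```

Because the variable is set only in this shell, production sessions started elsewhere are unaffected.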
 
 
This environment variable can also be set permanently.
 

Windows XP

  1. Choose Start > Control Panel > System.
  2. Click the Advanced tab, and then click Environment Variables.
  3. In the User variables section, click New to create a new environment variable, and enter the following values:
Variable Name = SIEBEL_LOG_EVENTS
Variable Value = 3

  4. Launch the Siebel client application; the new setting will then take effect.

 
There are two types of log file name convention for Siebel version 6.x and later:
 
  1. The log file name is, by default, <program>.log (for example, Siebel.log or UpgWiz.log).
 
  2. The log file name is created from the process ID and task ID (for example, Syncthrd_nnn_yyy.log, where nnn is the process ID and yyy is the task ID). A file with this naming convention is typically created when a remote user starts the synchronization process with the Siebel Server.
 
 
For log files that follow the naming convention described above under item 1, each execution of the program creates a new log file, <program>.log, while archiving the previous versions as <program>_1.log, <program>_2.log, and so on, with higher numbers indicating chronologically older files. By default, only 10 archived log files are retained, and the oldest log file is deleted. You can change this number by running the following command:
 
set SIEBEL_LOG_ARCHIVES=<archive_number>
 
where <archive_number> is a positive integer that determines how many log files are retained. You can also set SIEBEL_LOG_ARCHIVES as a permanent environment variable, in the same way as SIEBEL_LOG_EVENTS above.
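The rotation can be simulated with plain files. A sketch for illustration only (the Siebel client performs this renaming itself; the directory and file names below are made up):

```shell
# Simulate the archive rotation described above with SIEBEL_LOG_ARCHIVES=3
# over five program runs. Illustration only.
ARCHIVES=3
DEMO=/tmp/siebel_log_demo
mkdir -p "$DEMO"
rm -f "$DEMO"/Siebel*.log
run=1
while [ "$run" -le 5 ]; do
  rm -f "$DEMO/Siebel_$ARCHIVES.log"        # the oldest archive is deleted
  i=$ARCHIVES
  while [ "$i" -gt 1 ]; do                  # shift archives down: _2 -> _3, _1 -> _2
    prev=$((i - 1))
    if [ -f "$DEMO/Siebel_$prev.log" ]; then
      mv "$DEMO/Siebel_$prev.log" "$DEMO/Siebel_$i.log"
    fi
    i=$prev
  done
  if [ -f "$DEMO/Siebel.log" ]; then        # the previous run becomes _1
    mv "$DEMO/Siebel.log" "$DEMO/Siebel_1.log"
  fi
  echo "run $run" > "$DEMO/Siebel.log"      # each execution starts a fresh log
  run=$((run + 1))
done
ls "$DEMO"
```

After five runs with three archives retained, Siebel.log holds the latest run, Siebel_3.log holds the oldest surviving one, and the very first run's log is gone.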
 
Log files that follow the naming convention described above under item 2 are never deleted, regardless of the value of the SIEBEL_LOG_ARCHIVES variable.
 
 
The location of the log file of a Siebel client application is determined by the following factors.
 
If SIEBEL_LOG_DIR is set, for example:
 
set SIEBEL_LOG_DIR=<dir>
 
The log file will be created in the specified directory. Make sure this directory already exists and that you have write permission to it. You can also set SIEBEL_LOG_DIR as a permanent environment variable, in the same way as SIEBEL_LOG_EVENTS above.
 
If SIEBEL_LOG_DIR is not specified, the log file will be created in the <ClientRootDir>\log directory.
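The resolution rule above is just "environment variable wins, otherwise the client's log directory". A sketch, where CLIENT_ROOT stands in for <ClientRootDir> (for the demonstration, assume SIEBEL_LOG_DIR is not set):

```shell
# Where the client writes its log (sketch; CLIENT_ROOT is a placeholder
# for <ClientRootDir>).
unset SIEBEL_LOG_DIR
CLIENT_ROOT=/tmp/siebel_client_demo
mkdir -p "$CLIENT_ROOT/log"
LOG_DIR="${SIEBEL_LOG_DIR:-$CLIENT_ROOT/log}"  # env var wins when set
echo "log directory: $LOG_DIR"
```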

Tracing a Siebel Server Task with SQL Trace and TKPROF in Oracle

tkprof is one of the most useful utilities available to DBAs for diagnosing performance issues. It formats a raw SQL trace file into a readable report for performance analysis. The DBA can then identify and resolve performance issues such as poorly written SQL, missing indexes, and wait events.

Why and When to Use tkprof

Use tkprof and session tracing when the database or a particular session is having performance problems. tkprof will generally be used infrequently, when researching a very particular performance issue. A user may complain that the response time for a session is abysmal compared to the prior week. Session tracing and tkprof can be used to see exactly what is happening on the database, enabling the DBA to take corrective action.

The utility can also be used to view SQL that is being executed for an application. In some situations, this will be the only mechanism a DBA will have to view SQL. These situations include the execution of encrypted PL/SQL code on the database or submission of SQL statements from third party applications.

Identifying the Siebel Server Task

To help diagnose problems that occur with Siebel server tasks, it may be necessary to capture an RDBMS trace. To trace a server task on an Oracle RDBMS, carry out the following steps:

1. First, identify the process ID of the Siebel server task. This can be done from:

Server Administration > Server Tasks (in version 5.x, 6.x)

Server Administration > Tasks (in version 7.x)

Administration – Server Management > Tasks (in version 8.0).

There is a PID column that identifies the process id for the server task. Make a note of the PID identified.

2. Log on to the Siebel database with SQL*Plus as a user with permission to query the V$ views (for example, SYSTEM or another DBA user) and run the following SQL statement:

SELECT SID, SERIAL#
FROM V$SESSION
WHERE PROCESS LIKE '<PID>:%';

where <PID> is the process ID identified in step 1. Make a note of the SID and SERIAL# pairs returned, as there may be more than one Oracle session per process.

3. Start SQL tracing for each SID and SERIAL# pair identified in step 2:

exec sys.dbms_system.set_sql_trace_in_session (SID, SERIAL#, true);

Tracing for this session is now on, and Oracle writes the trace file to the user dump destination specified by the USER_DUMP_DEST initialization parameter. To turn the trace off:

exec sys.dbms_system.set_sql_trace_in_session (SID, SERIAL#, false);

4. Finally, format the trace file with the TKPROF utility. TKPROF accepts as input a trace file produced by the SQL trace facility and produces a formatted output file. Run TKPROF on each individual trace file. For example:

TKPROF ora53269.trc ora53269.prf SORT = (EXECPU, FCHCPU)

The SORT value causes TKPROF to sort the SQL statements in order of the sum of the CPU time spent executing and the CPU time spent fetching rows before writing them to the output file.
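Steps 2 through 4 can be collected in a small script. A sketch only: the PID, connect string, and trace file name are placeholders, and nothing below touches a database (the sqlplus and tkprof invocations are printed, not executed):

```shell
# Sketch: generate the SQL for steps 2-3 and show the step-4 command.
# PID and the connect string are placeholders for your environment.
PID=12345
SQLFILE=/tmp/trace_task.sql
cat > "$SQLFILE" <<EOF
-- Step 2: find the Oracle session(s) for server task PID $PID
SELECT SID, SERIAL# FROM V\$SESSION WHERE PROCESS LIKE '$PID:%';
-- Step 3: for each pair returned, enable tracing:
-- exec sys.dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, true);
EOF
echo "sqlplus system/<password>@SIEBELDB @$SQLFILE"
echo "tkprof ora${PID}.trc ora${PID}.prf SORT=(EXECPU,FCHCPU)"
```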

Database Performance Considerations - Where Indexes Can Improve Performance and Where They Cannot

In a nutshell, a database index is an auxiliary data structure that allows faster retrieval of data stored in the database. Indexes are keyed on a specific column, so that queries like "give me all people with a last name of 'Sam'" are fast.

Indexes don't come for free. What you gain in retrieval speed you lose in insertion and deletion speed, because every time you alter a table its indexes must be updated accordingly. If a table is updated frequently, having indexes may cause the overall performance of your database to suffer. There is also a space penalty: indexes take up space in memory or on disk. A single index is smaller than its table because it contains only pointers to the data, but in general, the larger the table, the larger the index.

(1) When to Index

- Columns frequently used in search or query expressions
- Columns used to join tables
- High-selectivity keys. The selectivity of an index is the percentage of rows in a table having the same value for the indexed key. An index's selectivity is optimal if few rows have the same value.
- Foreign keys

(2) When Not to Index
- Columns and expressions with few distinct values
- Frequently updated columns

Consider indexing the foreign keys of referential integrity constraints, since these columns are commonly used to join the parent and child tables. Indexing foreign key columns helps avoid full table scans when searching for matching rows in the child table while DML is performed on the parent.

When choosing to index a column (or group of columns), consider whether the performance gain for queries is worth the performance loss for massive INSERTs, UPDATEs, and DELETEs and the quantity of space required to store the index. You might want to experiment by comparing the processing times of the SQL statements with and without indexes. You can measure processing time with the SQL Trace facility.

Finally, the best way to know whether you need an index is to generate the execution plan for the query. By analyzing it, you can determine whether an index should be created.
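The selectivity guideline above comes down to simple arithmetic. A sketch with made-up counts (in Oracle you would obtain them with a query such as SELECT COUNT(DISTINCT <col>), COUNT(*) FROM <table>):

```shell
# Selectivity estimate: distinct values as a percentage of total rows.
# The counts below are hypothetical placeholders.
DISTINCT_VALUES=48000   # distinct keys in the candidate column
TOTAL_ROWS=50000        # rows in the table
SELECTIVITY=$((100 * DISTINCT_VALUES / TOTAL_ROWS))
echo "selectivity: ${SELECTIVITY}%"   # near 100% suggests a good index key
```

A column with only a handful of distinct values (say, a status flag) would score a few percent at most, which is exactly the "few distinct values" case the list above tells you not to index.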

Siebel Administration - Server Management Views Inaccessible When ADSI Authentication Is Enabled

Oracle introduced Gateway authentication from Siebel 8 onwards, which mandates database login authentication for access to the Siebel Administration - Server Management views. Before 8.1.1.3, users could still access the Server Manager views; this changed as of 8.1.1.6. When you upgrade your servers to Siebel 8.1.1.6, you will no longer be able to access the Server Manager views using an ADSI-authenticated login: users attempting to use Server Manager related components and views are validated against the database, even though the primary authentication for the application object manager is ADSI or LDAP.

If the object manager is configured to use LDAP authentication, the Gateway Server should also be configured to use LDAP authentication; alternatively, Siebel administrators should use a database-authenticated URL to perform administrative tasks in the Server Admin views. The object manager log will show errors such as:

SBL-DAT-00712: Unable to retrieve credential string from user 'LDAP://ltdc1.testld.com/CN=ADSIuser,OU=MYDOMANIN,DC=testld,DC=com' in Active Directory.

SBL-SCM-00018: Could not open connection to Siebel Gateway configuration store (DEVAPP:2320).

Solution: administrative users who wish to access Server Manager related views through an LDAP- or ADSI-authenticated application object manager need a database account with the same user ID and password as their LDAP or ADSI account.
Backing up the siebns.dat

To avoid trouble arising from a corrupted siebns.dat file, create backup copies regularly and before each major modification of the Enterprise configuration.

The following command at the srvrmgr prompt allows us to create a backup copy of siebns.dat and specify a name for the new file. If the name is omitted, the file name will contain the current timestamp.

backup nameserver [filename]
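For a scheduled backup, the command can be passed to srvrmgr non-interactively with the /c flag. A sketch: the gateway host, enterprise name, and credentials are placeholders, and the srvrmgr line is commented out because it needs a live Siebel environment:

```shell
# Scripted siebns.dat backup (sketch). GATEWAYHOST, SIEBEL, SADMIN, and
# PASSWORD are placeholders for your environment.
BACKUP_NAME="siebns_$(date +%Y%m%d).dat"
echo "backing up name server to $BACKUP_NAME"
# srvrmgr /g GATEWAYHOST /e SIEBEL /u SADMIN /p PASSWORD \
#         /c "backup nameserver $BACKUP_NAME"
```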

How to Synchronize Siebel Batch components from srvrmgr command line utility
Most Siebel administrators use the Siebel Server Manager command-line utility, commonly referred to as srvrmgr. Ever wondered how to synchronize the batch components after creating Siebel component definitions?

After a fresh install of a Siebel database, and after creating or modifying batch component definitions, administrators must ensure that the data residing in the siebns.dat file (managed by the Siebel Gateway Name Server) is synchronized with the data in the S_SRM_ACTION and S_SRM_ACT_PARAM tables in the Siebel database. This is typically accomplished by clicking the Synchronize button in the Synchronize view of the Administration - Server Configuration screen in the Siebel Web Client.

For administrators who want to automate the task using a script, you can issue the following command at the srvrmgr prompt:

sync comps

or, alternatively:

synchronize components

As a prerequisite, you must set the environment variable SIEBEL_DEBUG_FLAGS to a value of 16 (ideally as part of the script); otherwise srvrmgr will just say "Synchronize is not available...". To set SIEBEL_DEBUG_FLAGS, open the command prompt and type:

SET SIEBEL_DEBUG_FLAGS=16
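A scripted version might look like the following sketch (POSIX shell shown; on Windows use SET. The gateway host, enterprise, and credentials are placeholders, and the srvrmgr line is commented out because it needs a live Siebel environment):

```shell
# Sketch: prerequisite flag plus the non-interactive srvrmgr call.
export SIEBEL_DEBUG_FLAGS=16   # without this, srvrmgr refuses to synchronize
echo "SIEBEL_DEBUG_FLAGS=$SIEBEL_DEBUG_FLAGS"
# srvrmgr /g GATEWAYHOST /e SIEBEL /u SADMIN /p PASSWORD /c "synchronize components"
```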
Database Tuning Tips for EIM - DB2 UDB

  • Use the DB2 load utility with the replace option when loading EIM tables, and turn off table logging if possible.
  • Use separate tablespaces for the EIM tables and the base tables.
  • Use large page sizes for the EIM and the larger base tables. A page size of 16 KB or 32 KB has been found to give good performance in practice. Larger page sizes allow more data to fit on a single page and reduce the number of levels in the index B-tree structures.
  • Similarly, use large extent sizes for both the EIM and the large base tables.

  • Consider using DMS containers for all Siebel tablespaces. Using raw devices or volumes will further help to improve performance.
  • Ensure that the tablespace containers are equitably distributed across the logical and physical disks and the I/O controllers of the database server.
  • Use separate bufferpools for the EIM tables and the target base tables. Since initial EIM loads are quite large and there are usually no online users, it is recommended to allocate a significant amount of memory to the EIM and base table bufferpools.
  • Reorganize tables if the data on disk is fragmented. Use the reorgchk utility with current statistics to find fragmented tables and indexes.

  • Periodically ensure that table and index statistics are collected. Do not use RUNSTATS with the DETAILED option.
  • Use DB2 snapshot monitors to ensure performance is optimal and to detect and resolve any performance bottlenecks.
  • Log retain can be turned off during the initial load, but remember to turn it back on before going live.
  • For the EIM and base tables involved, alter the tables to set them to VOLATILE. This ensures that indexes are preferred over table scans.

 
Consider the following settings for the DB2 registry values:
 
  • DB2_CORRELATED_PREDICATES = YES
  • DB2_HASH_JOIN = NO
  • DB2_RR_TO_RS = YES
  • DB2_PARALLEL_IO = “*”
  • DB2_STRIPED_CONTAINERS = ON (when using RAID devices for tablespace containers)
 
Consider the following settings for the DB2 database manager configuration parameters:
 
  • INTRA_PARALLEL = NO (may be used during large index creation)
  • MAX_QUERYDEGREE = 1 (may be increased during large index creation)
  • SHEAPTHRES = 100,000 (depends upon available memory, SORTHEAP setting, and other factors)
 
Consider the following settings for the database configuration parameters:
 
  • CATALOGCACHE_SZ = 6400
  • DFT_QUERYOPT = 3
  • LOCKLIST = 5000
  • LOCKTIMEOUT = 120 (between 30 and 120)
  • LOGBUFSZ = 512
  • LOGFILESZ = 8000 or higher
  • LOGPRIMARY = 20 or higher
  • LOGRETAIN = NO (only during initial EIM loads)
  • MAXLOCKS = 30
  • MINCOMMIT = 1
  • NUM_IOCLEANERS = Number of CPUs in the database server
  • NUM_IOSERVERS = Number of disks containing DB2 containers
  • SORTHEAP = 10240 (for initial EIM loads only, thereafter, during production run set it to between 64 and 256)
  • STAT_HEAP_SZ = 8000
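The three groups of settings above are applied with db2set (registry), db2 update dbm cfg (database manager), and db2 update db cfg (database). A sketch that prints representative commands rather than executing them, since they require a live DB2 instance; SIEBELDB is a placeholder database name:

```shell
# Sketch: how a few of the settings above would be applied on a DB2
# instance. SIEBELDB is a placeholder database name.
APPLY_DB="SIEBELDB"
echo "db2set DB2_CORRELATED_PREDICATES=YES"
echo "db2 update dbm cfg using SHEAPTHRES 100000"
echo "db2 update db cfg for $APPLY_DB using SORTHEAP 10240"
```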
Table Fragmentation and Siebel EIM

Prior to running EIM it is important to clean up fragmented objects, especially those that will be used during the EIM processing. The following SQL statement can be used to identify objects with greater than 10 extents:

SELECT segment_name, segment_type, tablespace_name, extents
FROM dba_segments
WHERE owner = '<Siebel table owner>'
AND extents > 9;

To fix fragmentation, the objects will need to be rebuilt with appropriate storage parameters. Always be careful when rebuilding objects because of defaults, triggers, etc. on the objects.

Optimizer Mode

Oracle's optimizer mode can also affect EIM performance. The Siebel application has been found to perform better under rule-based optimization during normal application usage. While there have been cases where cost-based optimization improved EIM performance, this should only be attempted as a last resort, and the mode must be switched back to RULE for online usage. Be aware that Siebel supports only rule-based optimization. The optimizer mode can be verified by running the following query:

SELECT NAME, VALUE FROM V$PARAMETER WHERE NAME = 'optimizer_mode';

(must have DBA privilege)

What Is the FREELISTS Parameter?

Multiple EIM processes can be executed against an EIM table provided they all use different batches or batch ranges. The concern is that contention for locks on common objects may be experienced. To run multiple jobs in parallel against the same EIM table, check that the FREELISTS parameter is set appropriately for the tables and indexes used in the EIM processing. This includes the EIM tables and indexes as well as the base tables and indexes.

The value of this parameter specifies the number of lists of free blocks kept in memory and available for record insertion. As a rule of thumb, set it to at least half the intended number of parallel jobs to be run against the same EIM table (for example, a FREELISTS setting of 10 should permit up to 20 parallel jobs against the same EIM table). This parameter is set at the time of object creation, and its default value is 1. To check the value of this parameter for a particular object, the following query can be used:

SELECT SEGMENT_NAME, SEGMENT_TYPE, FREELISTS
FROM DBA_SEGMENTS
WHERE SEGMENT_NAME = '<object name>';

To change this parameter the object must be rebuilt. Again, be careful when rebuilding objects because of defaults, triggers, etc. on the objects. To rebuild the object follow the steps below:

1. Export the data from the table with the grants.
2. Drop the table.
3. Re-create the table with the desired FREELIST parameter.
4. Import the data back into the table.
5. Rebuild the indexes with the desired FREELIST parameter.
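The rule of thumb above in script form, with a made-up job count:

```shell
# Rule of thumb from the text: FREELISTS should be at least half the
# number of parallel EIM jobs run against the same table.
PARALLEL_JOBS=20                      # hypothetical example value
NEEDED=$(( (PARALLEL_JOBS + 1) / 2 )) # round up for odd job counts
echo "FREELISTS >= $NEEDED for $PARALLEL_JOBS parallel jobs"
```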

Siebel Database Layout Guidelines

As with most client-server applications, the overall performance of Siebel eBusiness applications is largely dependent on the I/O performance on the database server. To ensure optimal I/O performance, it is critical that the tables and indexes in the database be arranged across available disk devices in a manner that evenly distributes the I/O load.

The mechanism for distributing database objects varies by RDBMS, depending on the manner in which storage space is allocated. Most databases have the ability to assign a given object to be created on a specific disk.

A redundant array of independent disks, or RAID, can provide large amounts of I/O throughput and capacity while appearing to the operating system and RDBMS as a single large disk (or multiple disks, as desired, for manageability). The use of RAID can greatly simplify the database layout process by providing an abstraction layer above the physical disks while ensuring high performance. Regardless of the RDBMS you implement and the disk arrangement you choose, be sure to properly distribute the following types of database objects:

Database log or archive files.

Temporary workspace used by the database.

Tables and Indexes:

In most implementations, the tables and corresponding indexes in the following list tend to be some of the more heavily used and should be separated across devices. Refer to the Siebel Installation Guide in Siebel Bookshelf for a more complete listing. In general, the indexes listed below should be on different physical devices from the tables on which they are created.

o S_ACCNT_POSTN
o S_OPTY
o S_ADDR_ORG
o S_OPTY_POSTN
o S_CONTACT
o S_POSTN_CON
o S_DOCK_TXN_LOG
o S_PARTY_REL
o S_PARTY
o S_SRV_REQ
o S_EVT_ACT
o S_ORG_EXT

For organizations that plan to make extensive use of Siebel Enterprise Integration Manager (EIM), it is recommended to put the key EIM tables (based on the unique business requirements) and their corresponding indexes on different devices from the Siebel base tables and indexes, because all are accessed simultaneously during EIM operations.
