How to find out the DB2 connection port

There may be other methods, but this short article shows a simple way to find out which port a DB2 server is listening on.

First we get the TCP/IP service name:

> db2 get dbm cfg | grep SVCENAME

We capture the result:

TCP/IP Service name (SVCENAME) = db2TRP

Look at /etc/services:

> cat /etc/services | grep db2TRP

db2TRP 5912/tcp # DB2 Communication Port


The listening port is 5912!

Translation of terminology Oracle - DB2 LUW

With version 9.7 of DB2 LUW, IBM makes a nod to Oracle DBAs, who are far more numerous in the market.

For this reason, version 9.7 introduces Oracle compatibility modes that let you perform tasks in DB2 with the ease and knowledge that Oracle DBAs already have. However, it is important to know how terminology translates between Oracle and DB2 if you intend to get into the DB2 world.

In this first article, I list a number of terms so that this introduction stays simple and the DB2 documentation can be read easily, covering general terminology, updates, utilities, and views.
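As a small illustration of the kind of translation involved, Oracle's data dictionary view dba_tables roughly corresponds to DB2's syscat.tables, and Oracle's dual table to DB2's sysibm.sysdummy1 (the schema name 'SCOTT' is just an example):

-- Oracle
SELECT table_name FROM dba_tables WHERE owner = 'SCOTT';
SELECT sysdate FROM dual;

-- DB2 LUW equivalents
SELECT tabname FROM syscat.tables WHERE tabschema = 'SCOTT';
SELECT current timestamp FROM sysibm.sysdummy1;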


Defragment to optimize MySQL tables


In MySQL, when you delete records from a table, the space is not reclaimed automatically. It remains as empty space that new insertions will take advantage of.

The problem is that if a table undergoes many DELETE operations, its physical space becomes increasingly fragmented and performance suffers.

For the MyISAM and InnoDB engines, the OPTIMIZE TABLE command is available to optimize a table; among other things, it performs an automatic defragmentation of the table.

It is highly recommended to run this command regularly, especially on the tables that receive the most DELETE statements.

As a precaution, keep in mind that while the command runs the table is locked. Remember this when you plan to use it on large, busy tables.

The syntax is super simple:


OPTIMIZE [NO_WRITE_TO_BINLOG | LOCAL] TABLE mi_tabla1 [, mi_tabla2] ...;

To decide which fragmented tables to optimize, you can select those that have free space, probably as a result of DELETE statements:
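A query along these lines against information_schema should list the candidate tables; the schema name 'mi_base_de_datos' is just a placeholder:

SELECT TABLE_SCHEMA, TABLE_NAME, DATA_FREE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mi_base_de_datos'  -- placeholder schema name
  AND DATA_FREE > 0                      -- bytes of free (fragmented) space
ORDER BY DATA_FREE DESC;

Each table returned can then be passed to OPTIMIZE TABLE as shown above.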

Improve MySQL performance by adjusting some parameters

MySQL, like most database management systems, lets you easily modify the parameters that control the memory dedicated to certain tasks, resource utilization, concurrency limits, and so on.

Properly adjusting these parameters can yield significant performance improvements, especially if the database server(s) are short on resources and the SQL cannot be optimized any further.

I've recently made some basic adjustments to a MySQL database, so I'll take the opportunity to explain part of the process I followed, for anyone looking for an easy way to do a first round of parameter tuning. That's not to say this is the best way to do it, only that it has worked out well for me ;)

First of all, it can be very helpful to look in phpMyAdmin at the sections 'Show information about MySQL runtime' and 'Show MySQL system variables', normally accessible from the application's home page.

The first shows information and statistics collected and maintained by the system since it was started. Pay special attention to the values of the variables displayed in red, and to the advice shown to the right of those values.

The second shows the system variables; use it to check the current values of the parameters you might later modify to improve performance.
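The same checks can also be done directly from the MySQL client; a minimal sketch, using key_buffer_size purely as an illustrative parameter:

SHOW GLOBAL STATUS LIKE 'Key_read%';           -- runtime counters gathered since startup
SHOW GLOBAL VARIABLES LIKE 'key_buffer_size';  -- current value of the parameter
SET GLOBAL key_buffer_size = 268435456;        -- adjust it to 256 MB (requires administrative privileges)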


Update SQL Server table statistics dynamically throughout a database

In Oracle databases there is a data dictionary view that lists all the tables in the database ('dba_tables'), and we can use 'dba_tables' to build maintenance scripts dynamically. In SQL Server we can also create table maintenance scripts by querying the table [dataBase].dbo.sysobjects.

In the example below we use a T-SQL script to update statistics for all tables in a SQL Server database by querying the data dictionary dynamically (using the table dbo.sysobjects). This T-SQL code can be encapsulated in a stored procedure or in a job executed by the SQL Server Agent, to automatically keep statistics up to date on all tables of the dbo schema in a SQL Server database.
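A minimal sketch of such a script, assuming it runs inside the target database and only covers the user tables of the dbo schema:

DECLARE @tableName sysname;
DECLARE table_cursor CURSOR FOR
    SELECT name
    FROM dbo.sysobjects
    WHERE xtype = 'U'               -- user tables only
      AND USER_NAME(uid) = 'dbo';   -- dbo schema only
OPEN table_cursor;
FETCH NEXT FROM table_cursor INTO @tableName;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Dynamic SQL: update the statistics of the current table
    EXEC ('UPDATE STATISTICS dbo.[' + @tableName + ']');
    FETCH NEXT FROM table_cursor INTO @tableName;
END;
CLOSE table_cursor;
DEALLOCATE table_cursor;

Wrapping this in a stored procedure or an Agent job is then straightforward.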

Oracle 10g: Possible optimization in massive data loads

In batch processes that perform a massive data load into the same table, with an INSERT or UPDATE per record inside a loop, the process can be optimized by using parameters (if the client supports it) or, if we use ODBC, with bind variables.
Recall the steps taken by Oracle to process a query:
1) Syntactic validation
2) Semantic validation
3) Optimization
4) Generation of the QEP (Query Execution Plan)
5) Execution of the QEP (Query Execution Plan)
Statements can take their parameters by value (WHERE salary > 1000) or, using bind variables (WHERE salary > :b1), have the values supplied once the statement is compiled. The advantage of the second option is that Oracle compiles the statement only once and reuses the compiled code for each of the parameter values.
But we must be careful because, in the latter case, Oracle cannot calculate the selectivity of the query and instead applies a default selectivity (associated with each type of operation), which can lead to wrong decisions by the optimizer.
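A minimal sketch of the two approaches inside a PL/SQL loop; the table and column names are purely illustrative:

BEGIN
  FOR i IN 1 .. 1000 LOOP
    -- Literal value: the statement text changes on every iteration,
    -- so Oracle parses it again each time.
    EXECUTE IMMEDIATE
      'UPDATE empleados SET bonus = 100 WHERE salario > ' || TO_CHAR(1000 + i);

    -- Bind variable: the statement text is always the same,
    -- so Oracle compiles it once and reuses the cursor.
    EXECUTE IMMEDIATE
      'UPDATE empleados SET bonus = 100 WHERE salario > :b1'
      USING 1000 + i;
  END LOOP;
END;
/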

Easily export data from Oracle to flat file

A simple way to export data from a query, a table, etc. of an Oracle database to a flat file is to use the SPOOL command in SQL*Plus. This way we don't need to rely on visual tools, which are not always available or don't always work the way we want. You can also use Oracle's formatting functions in the SELECT statement itself, so that the data is generated already in the format we need.

If, for example, we want to retrieve some data from all the records in a customers table, sorted by registration date, we just open a SQL*Plus session and run this series of commands:

SQL> SET HEADING OFF
SQL> SET FEEDBACK OFF
SQL> SPOOL C:\datos_de_clientes.txt
SQL> SELECT 'Cliente ' || CLI_NOMBRE || ', ' || CLI_NIF || '. Fecha alta: ' || TO_CHAR(CLI_FECHAALTA,'YYYY-MM-DD')
FROM TABLA_CLIENTES
ORDER BY CLI_FECHAALTA DESC;
SQL> SPOOL OFF;
SQL> SET FEEDBACK ON
SQL> SET HEADING ON

The first lines hide the column headers with the field names and the feedback messages, which don't concern us because we only want the data. SPOOL directs the data output to the file 'datos_de_clientes.txt' on the C: drive of the local machine.
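If the spooled file comes out with blank lines at page boundaries, wrapped lines, or trailing blanks, a few additional SQL*Plus settings usually help as an optional refinement: SET PAGESIZE 0 removes page breaks and column headings, SET LINESIZE widens the line to avoid wrapping of long concatenated strings, and SET TRIMSPOOL ON strips trailing blanks from each spooled line.

SQL> SET PAGESIZE 0
SQL> SET LINESIZE 500
SQL> SET TRIMSPOOL ON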