
Posts

Showing posts from April, 2021

Foreign keys and missing indexes

I think it is a good idea to have foreign keys and also have the matching indexes. The following script will take care of the indexes you missed. It will show you which indexes are missing, even for compound foreign keys. But you should check existing indexes just to be sure you are not creating a useless index.

WITH f AS (
    SELECT t.table_name, t.constraint_name AS index_name,
        LISTAGG(t.column_name, ', ') WITHIN GROUP (ORDER BY t.position) AS fk_cols
    FROM user_cons_columns t
    JOIN user_constraints n
        ON n.constraint_name = t.constraint_name
    WHERE n.constraint_type = 'R'
    GROUP BY t.table_name, t.constraint_name
)
SELECT f.table_name, f.index_name, f.fk_cols,
    i.index_name AS existing_index,
    i.cols AS index_cols,
    --
    'CREATE INDEX ' || RPAD(f.index_name, 30) || ' ON ' || RPAD(f.table_name, 30) || ' (' || f.fk_cols
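The expression at the end builds the CREATE INDEX statements for you. As a minimal sketch of what the generated DDL would look like, with table, constraint and column names made up purely for illustration:

-- hypothetical names; review the generated statements before running them
CREATE INDEX fk_orders_customers ON orders (customer_id);
CREATE INDEX fk_order_items_orders ON order_items (order_id, order_line);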

APEX and locked user

I have a database account and it is locked. I have an APEX application based on this schema. Am I able to change any object in SQL Developer (SQL*Plus or whatever)? No. That's correct. Am I able to change any object in APEX thru SQL Workshop? Yes! I don't think this is correct behaviour. In APEX administration you can deny access to the whole SQL Workshop to prevent this, but you also lose the legit ability to run/explore anything there. I think there is missing functionality which should check for a locked account and, if one is found, either show objects in Object Browser as read only or deny access just to Object Browser. It should also fail all scripts executed thru this account. Maybe this could be a good motivation to always use a shadow/proxy schema?
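For context, a quick way to see which accounts are locked (and thus should not be editable anywhere); a minimal sketch, assuming you have access to dba_users (user_users shows only your own account):

SELECT username, account_status, lock_date
FROM dba_users
WHERE account_status LIKE 'LOCKED%'
ORDER BY username;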

Cloud certifications for 2021 (updated)

This article is an updated version of the article released on April 12th. If you want to pursue multiple certifications and you also want access to Oracle University materials, you should consider buying the whole Oracle Learning Subscription. You can check the complete list of new exams.

Oracle Cloud Infrastructure

These certifications are updated and were previously included in Oracle Cloud Infrastructure Learning Subscription:

Oracle Cloud Infrastructure Foundations 2021 Associate (1Z0-1085-21)
Oracle Cloud Infrastructure Developer 2021 Associate (1Z0-1084-21)
Oracle Cloud Infrastructure 2021 Architect Associate (1Z0-1072-21)*
Oracle Cloud Infrastructure 2021 Architect Professional (1Z0-997-21)*
Oracle Cloud Infrastructure 2021 Cloud Operations Associate (1Z0-1067-21)
Oracle Cloud Infrastructure 2021 HPC and Big Data Solutions Associate (1Z0-1089-21)
Oracle Cloud Infrastructure 2021 Enterprise Workloads Associate (1Z0-1088-21)
Oracle Cloud Platform Systems Management 2021 Speci

Fix constraint names

The first thing I would focus on is fixing system generated constraint names. When any constraint check fails, you will get an idea about the issue just from a proper constraint name. I like these conventions:

PK_ prefix for primary keys [P], followed by table name
FK_ prefix for foreign keys [R], followed by table name and shortcut
UQ_ prefix for unique constraints [U], followed by table name and optionally shortcut
CH_ prefix for checks [C], followed by table name and shortcut
NN_ prefix for NOT NULL constraints [C], followed by table name and shortcut

As a shortcut I use the column name or some abbreviation. These conventions work only if your tables have reasonably short names (like <= 27 chars) due to the Oracle limit of 30 chars. Or, since Oracle 12.2, you can extend this to 128 chars. Let's check too long table names:

SELECT t.table_name, LENGTH(t.table_name) AS len
FROM user_tables t
WHERE LENGTH(t.table_name) > 27  -- 27 = 30 - 3 chars for prefix (CS_, P
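Once you spot a badly named constraint, the fix itself is a one-liner; a minimal sketch with hypothetical object names (not from the original post):

-- hypothetical names for illustration
ALTER TABLE employees RENAME CONSTRAINT SYS_C0012345 TO PK_EMPLOYEES;
ALTER INDEX SYS_C0012345 RENAME TO PK_EMPLOYEES;  -- rename the backing index too, if it was system generated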

Constraints overview

For me, data integrity is essential, yet often overlooked by others. I remember a project with 1000+ tables and zero foreign keys, because "they were slowing them down". Trading data integrity for performance should be an exception, not the rule. If you know little about constraints, I suggest starting with the latest documentation to gain some background. You can also explore other articles with the constraint label. To get a list of constraints in your schema you can query the user_constraints view.

SELECT n.table_name, n.constraint_type, n.constraint_name, n.status,
    CASE WHEN n.generated = 'GENERATED NAME' THEN 'YES' END AS generated
FROM user_constraints n
WHERE n.table_name NOT IN (SELECT object_name FROM RECYCLEBIN)
ORDER BY n.table_name, n.constraint_type;

Primary keys, foreign keys and unique constraints

These queries can be used for a more detailed overview of primary keys, foreign keys and unique constraints. Checks are a bit complic
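As an illustration of the kind of detail query the post goes on to show, here is a minimal sketch listing foreign keys with their columns (my own variant, not necessarily the author's exact query):

SELECT n.table_name, n.constraint_name,
    LISTAGG(c.column_name, ', ') WITHIN GROUP (ORDER BY c.position) AS cols
FROM user_constraints n
JOIN user_cons_columns c
    ON c.constraint_name = n.constraint_name
WHERE n.constraint_type = 'R'
GROUP BY n.table_name, n.constraint_name
ORDER BY n.table_name, n.constraint_name;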

Filtering Interactive Grid with/without APEX_IG

I used to like Interactive Reports, but not anymore. They look different than Interactive Grids, and if you have reports and grids on the same page it doesn't look good. They sometimes behave differently and I don't like that. My main issue is that switching from a report to a grid is painful and time consuming. Every time I do that I lose a lot of setup, pretty much everything you set on column level. For some time now I have been creating every report as an Interactive Grid (mostly with editing off). I save plenty of time when users decide they would like to edit that report. Grids also offer some features missing in Reports, like column groups and the Master-Detail feature. But one thing I always missed was the ability to set grid filters from PL/SQL. Grids didn't have that feature, but since APEX 20.1 they do, thru the APEX_IG package. Previously I had to set up a dynamic action and filter the grid after it was visible to the user. This grid refresh was unwanted and noticed by users.

var region_id =
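For orientation, setting a grid filter from PL/SQL with APEX_IG could look roughly like this. This is a sketch only: the page number, static id and item name are hypothetical, and the ADD_FILTER parameter names are written from memory, so verify the exact signature in the APEX_IG documentation for your APEX version.

DECLARE
    l_region_id NUMBER;
BEGIN
    -- hypothetical lookup; static id 'grid_orders' is made up for this sketch
    SELECT region_id
    INTO l_region_id
    FROM apex_application_page_regions
    WHERE application_id = :APP_ID
        AND page_id      = 10
        AND static_id    = 'grid_orders';
    --
    -- verify the exact APEX_IG.ADD_FILTER signature in your APEX version
    apex_ig.add_filter (
        p_page_id       => 10,
        p_region_id     => l_region_id,
        p_column_name   => 'GROUP_ID',
        p_filter_value  => v('P10_GROUP_ID'),
        p_operator_abbr => 'EQ'
    );
END;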

No to APEX Automations

APEX Automations are out and it seems to me like they are just a layer above DBMS_SCHEDULER. I don't get it. If I have a grant for DBMS_SCHEDULER, then I don't see any added value in using Automations. To have more things hardcoded in APEX? If you don't have that grant, why should you be able to do it via APEX? I just don't see the target group, because I prefer scripts over typing things into APEX. In my projects I have a page for a jobs (scheduler) overview: running jobs, scheduled jobs, historic jobs with errors and details. Basically everything I ever needed, in just a few reports and buttons. When a specific job is selected I can enable/disable/drop the job, restart it, investigate failures... You can check the source code for these views: p940_running_jobs, p940_scheduled_jobs, p940_jobs_history, p940_jobs_details, which are based on the user_scheduler_running_jobs, user_scheduler_job_run_details and user_scheduler_jobs views. You can also check the source code of w
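For reference, the kind of overview query such a page can be built on; a minimal sketch against the standard scheduler views (the column selection is mine, not the author's):

SELECT j.job_name, j.enabled, j.state, j.repeat_interval,
    j.last_start_date, j.next_run_date
FROM user_scheduler_jobs j
ORDER BY j.job_name;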

Dependencies

What objects are used/referenced in a selected object? You can explore the user_dependencies view in the latest documentation.

SELECT d.referenced_type,
    LPAD(' ', (LEVEL - 1) * 2) || d.referenced_name AS referenced_name
FROM user_dependencies d
WHERE d.referenced_owner = USER
CONNECT BY NOCYCLE d.name = PRIOR d.referenced_name
    AND d.type = PRIOR d.referenced_type
START WITH d.name = '&OBJECT_NAME'
    --AND d.type = '&OBJECT_TYPE'
ORDER SIBLINGS BY d.referenced_name;

SELECT d.referenced_type,
    LISTAGG(d.referenced_name, ', ') WITHIN GROUP (ORDER BY d.referenced_name) AS referenced_names
FROM (
    SELECT DISTINCT d.referenced_type, d.referenced_name
    FROM user_dependencies d
    WHERE d.referenced_owner = USER
    CONNECT BY NOCYCLE d.name = PRIOR d.referenced_name
        AND d.type = PRIOR d.referenced_type
    START WITH d.name = '&OBJECT_NAME'
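The opposite question (what depends on the selected object) can be answered from the same view; a minimal sketch, my own variant:

SELECT d.type, d.name
FROM user_dependencies d
WHERE d.referenced_name = '&OBJECT_NAME'
    AND d.referenced_owner = USER
ORDER BY d.type, d.name;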

Fixing data type mismatches

This is a script for fixing data type mismatches based on PK and FK relations. These columns should match, but often they don't.

WITH s AS (
    SELECT a.table_name, a.column_name, c.position,
        --
        CASE
            WHEN a.data_type = 'NUMBER'
                THEN a.data_type || CASE WHEN a.data_precision IS NOT NULL THEN '(' || a.data_precision || DECODE(a.data_scale, 0, '', ', ' || a.data_scale) || ')' END
            WHEN a.data_type IN ('CHAR', 'VARCHAR', 'VARCHAR2')
                THEN a.data_type || '(' || DECODE(a.char_used, 'C', a.char_length || ' CHAR', a.data_length) || ')'
            ELSE a.data_type
            END AS data_type,
        --
        n.constraint_type, c.constraint_name, n.r_constraint_name
    FROM user_tab_columns a
    JOIN user_tables t
        ON t.table_name =
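The output of such a script typically ends up as ALTER statements; a minimal sketch with hypothetical table and column names, just to show the kind of fix being generated:

-- hypothetical names; adjust the FK column to match the referenced PK column
ALTER TABLE order_items MODIFY (order_id NUMBER(10));
ALTER TABLE order_items MODIFY (note VARCHAR2(200 CHAR));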

Table relations by foreign keys

Let's explore table references by traversing foreign keys. First, show all foreign keys for all tables (including multi-column keys).

WITH t (table_name, referenced_table, pk_name, fk_name) AS (
    SELECT DISTINCT r.table_name, p.table_name AS referenced_table,
        p.constraint_name AS pk_name, r.constraint_name AS fk_name
    FROM user_constraints r
    JOIN user_constraints p
        ON r.r_constraint_name = p.constraint_name
        AND r.constraint_type = 'R'
    WHERE p.constraint_type = 'P'
    UNION ALL
    SELECT NULL, t.table_name, NULL, NULL
    FROM user_tables t
    ORDER BY 1, 2
),
c AS (
    SELECT r.constraint_name,
        LISTAGG(c.column_name, ', ') WITHIN GROUP (ORDER BY c.position) AS cols
    FROM user_cons_columns c
    JOIN user_constraints r
        ON r.constraint_name = c.constraint_name
    WHERE r.constraint_type = 'R'
    GROUP BY r.constraint_n
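Once you have the parent/child pairs, you can also walk them hierarchically; a minimal sketch of the idea, independent of the query above and with a hypothetical root table name:

-- hypothetical root table 'CUSTOMERS'; lists tables referencing it, directly or indirectly
SELECT LPAD(' ', (LEVEL - 1) * 2) || r.table_name AS table_name,
    p.table_name AS referenced_table
FROM user_constraints r
JOIN user_constraints p
    ON p.constraint_name = r.r_constraint_name
WHERE r.constraint_type = 'R'
CONNECT BY NOCYCLE PRIOR r.table_name = p.table_name
START WITH p.table_name = 'CUSTOMERS';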

SYS_CONTEXT and APEX

I have to admit I found this the hard way, when users started complaining about weird things (in production). As a big fan of SYS_CONTEXT, I used it among other things for auditing columns (created_by, updated_by...) in procedures and triggers. I used Application Processes, set the processing point to Before Header and created application contexts for the specific page and user (via a package). Some people even use processes on the Global page (page zero). Well, that was a terrible idea. It worked fine whenever I tested it. But it didn't when users came. First I thought it was a user_id/client_id problem with DBMS_SESSION.SET_CONTEXT, but even when I matched the values in PL/SQL to the values from APEX, it didn't help. It still sometimes used values from someone else's session. I couldn't find it anywhere, but I had a suspicion about session management. I knew that APEX sessions are not the same thing as database sessions, but I somehow expected a dedicated db session for each APEX ses
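For background, setting an application context from PL/SQL looks roughly like this; a minimal sketch with hypothetical context and package names (the pitfall described above is about when and in which session you call it, not about the call itself):

-- hypothetical names; the context can only be set from the package named in USING
CREATE OR REPLACE CONTEXT app_ctx USING app_sess ACCESSED GLOBALLY;

-- inside the APP_SESS package:
DBMS_SESSION.SET_CONTEXT (
    namespace => 'APP_CTX',
    attribute => 'USER_ID',
    value     => 'JKVETINA',
    username  => SYS_CONTEXT('USERENV', 'SESSION_USER'),
    client_id => 'JKVETINA:12345'   -- must match the DBMS_SESSION.SET_IDENTIFIER value
);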

Performance issues with APEX_UTIL.GET_SESSION_ITEM

I faced severe performance degradation in one of my APEX applications. After some time I identified an issue with an APEX_UTIL.GET_SESSION_ITEM call. Let's create a test table with 1M rows:

--DROP TABLE test_table;
CREATE TABLE test_table (
    id          NUMBER,
    group_id    NUMBER
);
--
INSERT /*+ append */ INTO test_table (id, group_id)
SELECT ROWNUM, MOD(ROWNUM, 5)
FROM DUAL
CONNECT BY LEVEL <= 1000000;
--
COMMIT;

Check the table content:

SELECT group_id, COUNT(*) AS rows_
FROM test_table
GROUP BY group_id
ORDER BY 1;

Now we can simulate a query with the APEX_UTIL.GET_SESSION_ITEM function. For this test, item existence doesn't matter. Let's pretend we want to select rows for a specific group_id based on an APEX item value.

SELECT COUNT(*) AS rows_   -- 152 sec
FROM test_table t
WHERE t.group_id = NVL(APEX_UTIL.GET_SESSION_STATE('APEX_ITEM'), 0);

Any ideas why this is so slow? It is because that function is called for every single row in that table. Here is a
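The preview cuts off before the fix, but one common way to stop the per-row calls is to evaluate the function only once, for example via a scalar subquery. A sketch of that general technique, not necessarily the author's exact solution:

SELECT COUNT(*) AS rows_
FROM test_table t
WHERE t.group_id = (SELECT NVL(APEX_UTIL.GET_SESSION_STATE('APEX_ITEM'), 0) FROM DUAL);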

Convert unique key to primary key

I had a legacy table with a unique key but without a primary key. All columns were mandatory and there was no reason not to have a proper primary key. First, let's check the list of unique keys.

SELECT c.table_name, c.constraint_name,
    LISTAGG(c.column_name, ', ') WITHIN GROUP (ORDER BY c.position) AS cols
FROM user_constraints n
JOIN user_cons_columns c
    ON c.constraint_name = n.constraint_name
WHERE n.table_name = '&TABLE_NAME'
    AND n.constraint_type = 'U'
GROUP BY c.table_name, c.constraint_name
ORDER BY c.table_name, c.constraint_name;

Script to switch the UQ constraint to a PK:

DECLARE
    in_uq_constraint    CONSTANT VARCHAR2(30) := '&UQ_NAME';
    in_pk_name          CONSTANT VARCHAR2(30) := '&PK_NAME';
    --
    in_table_name       VARCHAR2(30);
    in_primary_cols     VARCHAR2(32767);
    q                   VARCHAR2(32767);
BEGIN
    -- get table name and primary columns
    SELECT p.table_name, LISTA
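For clarity, the DDL such a script ends up executing boils down to something like this (hypothetical names, shown only to illustrate the switch):

-- hypothetical names for illustration
ALTER TABLE invoices DROP CONSTRAINT uq_invoices KEEP INDEX;
ALTER TABLE invoices ADD CONSTRAINT pk_invoices PRIMARY KEY (invoice_no) USING INDEX uq_invoices;  -- reuse the kept index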

Parallel processing in PL/SQL

Parallel processing demo. Prepare data for processing:

--DROP TABLE test_par PURGE;
CREATE TABLE test_par (
    id          NUMBER PRIMARY KEY,
    order#      NUMBER,
    session_id  NUMBER
);
--
TRUNCATE TABLE test_par;
--
INSERT INTO test_par (id, order#)
SELECT LEVEL, LEVEL
FROM DUAL
CONNECT BY LEVEL <= 100000;
--
COMMIT;

Process data in chunks:

DECLARE
    in_threads      CONSTANT NUMBER := 8;
    in_chunk_size   CONSTANT NUMBER := 15000;
    --
    l_task_name     VARCHAR2(64) := 'test_par#' || TO_CHAR(SYSDATE, 'HH24MISS');
    l_exec          VARCHAR2(32000);
    l_try           PLS_INTEGER := 0;
    l_status        PLS_INTEGER;
BEGIN
    UPDATE test_par
    SET session_id = NULL;
    --
    DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => l_task_name, comment => '');
    DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID (
        task_name   => l_task_name,
        table_owner => USER,
        table
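The preview ends before the task is actually run. For orientation, the next step inside the same block usually looks something like this; a sketch of the standard DBMS_PARALLEL_EXECUTE flow, not the author's exact code (the chunk statement is made up):

-- continuation sketch: :start_id/:end_id are bound by DBMS_PARALLEL_EXECUTE for each chunk
l_exec := 'UPDATE test_par t
    SET t.session_id = SYS_CONTEXT(''USERENV'', ''SID'')
    WHERE ROWID BETWEEN :start_id AND :end_id';
--
DBMS_PARALLEL_EXECUTE.RUN_TASK (
    task_name      => l_task_name,
    sql_stmt       => l_exec,
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => in_threads
);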

Oracle wrapper for Python

Many years ago I created a simple wrapper to get access from Python to an Oracle database. You can find the revisited version in my Git repository. All you need to do is install the cx_Oracle module. You can connect to the database using a pickle file (see oracle.py) or with the following code:

from oracle_wrapper import Oracle

ora = Oracle({
    'user'  : 'USER_NAME',
    'pwd'   : 'PASSWORD',
    'host'  : 'SERVER_NAME',        # default port 1521
    'sid'   : 'SID',
    #'service' : 'SERVICE_NAME',    # use sid or service_name
})

Then you can execute any block of code with or without bound variables. You can also use the autobind feature. And you can also use executemany to bulk process rows.

data = ora.execute("BEGIN sess.create_session(in_user_id => :user_id); END;", user_id = 'JKVETINA')

You can also fetch any data from the Oracle database into named tuples so you can easily access them.

data = ora.f

Zodiac sign evaluation

Function to get the zodiac sign for a selected date.

DECLARE
    in_date     CONSTANT DATE := TRUNC(SYSDATE);
    curr_sign   VARCHAR2(30);
    --
    FUNCTION get_zodiac (
        in_date DATE
    )
    RETURN VARCHAR2
    AS
        mmdd CONSTANT CHAR(4) := TO_CHAR(in_date, 'MMDD');
    BEGIN
        RETURN CASE
            WHEN mmdd BETWEEN '0321' AND '0420' THEN 'ARIES'        -- beran
            WHEN mmdd BETWEEN '0421' AND '0521' THEN 'TAURUS'       -- byk
            WHEN mmdd BETWEEN '0522' AND '0621' THEN 'GEMINI'       -- blizenci
            WHEN mmdd BETWEEN '0622' AND '0722' THEN 'CANCER'       -- rak
            WHEN mmdd BETWEEN '0723' AND '0823' THEN 'LEO'          -- lev
            WHEN mmdd BETWEEN '0824' AND '0923' THEN 'VIRGO'        -- panna
            WHEN mmdd BETWEEN '0924' AND '1023' THEN 'LIBR
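The preview cuts the CASE expression short; the executable part of the same block would be a simple call like the sketch below (my own illustration, using the variables declared above):

-- executable section of the block above (sketch)
BEGIN
    curr_sign := get_zodiac(in_date);
    DBMS_OUTPUT.PUT_LINE(curr_sign);
END;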

Calculate working days

Calculate working days for each month. You can create a function from this and adjust your national holidays.

DECLARE
    last_month  VARCHAR2(8);
    last_count  PLS_INTEGER;
    --
    FUNCTION get_easter_monday (
        in_year PLS_INTEGER
    )
    RETURN DATE
    AS
        a NUMBER;
        c NUMBER;
        e NUMBER;
        g NUMBER;
        n NUMBER;
        x NUMBER;
    BEGIN
        c := in_year / 100 + 1;
        x := (3 * c / 4) - 12;
        g := MOD(in_year, 19) + 1;
        e := MOD(11 * g + 15 + ((8 * c + 5) / 25) - x, 30);
        --
        IF (e = 25 AND g < 11) OR e = 24 THEN
            e := e + 1;
        END IF;
        --
        n := 44 - e;
        IF n < 21 THEN
            n := n + 30;
        END IF;
        --
        a := n + 7 - MOD((5 * in_year / 4) - x - 3 + n, 7);
        --
        RETURN TRUNC(TO_DATE(TO_CHAR(in_year) || '0301', 'YYYYMMDD') + a);
    END;
    --
    FUNCTION is_business_day (
        in_date DATE
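If you only need weekdays and can ignore holidays, plain SQL is enough; a minimal sketch of that simpler variant, independent of the block above:

-- weekdays (Mon-Fri) in the current month, holidays not considered
SELECT COUNT(*) AS working_days
FROM (
    SELECT TRUNC(SYSDATE, 'MM') + LEVEL - 1 AS day_
    FROM DUAL
    CONNECT BY LEVEL <= TO_NUMBER(TO_CHAR(LAST_DAY(SYSDATE), 'DD'))
)
WHERE TO_CHAR(day_, 'DY', 'NLS_DATE_LANGUAGE=ENGLISH') NOT IN ('SAT', 'SUN');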

Monthly calendar

Simple calendar for the whole year, usable month by month.

SELECT TO_NUMBER(SUBSTR(x.week, 3)) AS week,
    x.month,
    MAX(DECODE(TO_CHAR(x.day, 'D'), '1', TO_NUMBER(TO_CHAR(x.day, 'FMDD')))) AS mon,
    MAX(DECODE(TO_CHAR(x.day, 'D'), '2', TO_NUMBER(TO_CHAR(x.day, 'FMDD')))) AS tue,
    MAX(DECODE(TO_CHAR(x.day, 'D'), '3', TO_NUMBER(TO_CHAR(x.day, 'FMDD')))) AS wed,
    MAX(DECODE(TO_CHAR(x.day, 'D'), '4', TO_NUMBER(TO_CHAR(x.day, 'FMDD')))) AS thu,
    MAX(DECODE(TO_CHAR(x.day, 'D'), '5', TO_NUMBER(TO_CHAR(x.day, 'FMDD')))) AS fri,
    MAX(DECODE(TO_CHAR(x.day, 'D'), '6', TO_NUMBER(TO_CHAR(x.day, 'FMDD')))) AS sat,
    MAX(DECODE(TO_CHAR(x.day, 'D'), '7', TO_NUMBER(TO_CHAR(x.day, 'FMDD')))) AS sun
FROM (
    SELECT TRUNC(SYSDATE, 'Y&#

Pipelined output

Short and simple demo of pipelined functions. A perfect way to get content from PL/SQL accessible in SQL as a table.

CREATE OR REPLACE FUNCTION string_split (
    in_src VARCHAR2,
    in_sep VARCHAR2
)
RETURN string_4k PIPELINED
AS
    i       PLS_INTEGER;
    items   VARCHAR2(32767) := in_src;  -- TRIM(BOTH in_sep FROM in_src);
BEGIN
    LOOP
        i := INSTR(items, in_sep);
        IF i > 0 THEN
            PIPE ROW(SUBSTR(items, 1, i - 1));
            items := SUBSTR(items, i + LENGTH(in_sep));
        ELSE
            PIPE ROW(items);
            EXIT;
        END IF;
    END LOOP;
    RETURN;
END;
/

The data type is needed, but it can also be stored in a package specification.

CREATE TYPE string_4k AS TABLE OF VARCHAR2(4000);
/

And finally the SQL:

SELECT ROWNUM AS position, COLUMN_VALUE AS value
FROM TABLE (
    string_split('A,B,C,,,D,E', ',')
);

Cloud certifications for 2021

New cloud certifications for this year were announced today. You can check the complete list of recently added certifications. If you want to pursue multiple certifications and you also want access to Oracle University materials, you should consider buying the whole Oracle Learning Subscription. I am interested in purchasing two cloud subscriptions. You can check the complete list.

Oracle Cloud Infrastructure

These certifications are updated and were previously included in Oracle Cloud Infrastructure Learning Subscription:

Oracle Cloud Infrastructure Foundations 2021 Associate (1Z0-1085-21)
Oracle Cloud Infrastructure Developer 2021 Associate (1Z0-1084-21)
Oracle Cloud Infrastructure 2021 Architect Associate (1Z0-1072-21)
Oracle Cloud Infrastructure 2021 Architect Professional (1Z0-997-21)
Oracle Cloud Infrastructure 2021 Cloud Operations Associate (1Z0-1067-21)
Oracle Cloud Infrastructure 2021 HPC and Big Data Solutions Associate (1Z0-1089-21)
Oracle Cloud Infrastructure

Sending UTF-8 email with compressed attachments

To send e-mail in UTF-8 with special characters in the body, subject and even in the recipient name, and with attached file(s), I use the following code.

DECLARE
    PROCEDURE send_mail (
        in_to           VARCHAR2,
        in_subject      VARCHAR2,
        in_body         CLOB,
        in_from         VARCHAR2    := NULL,
        in_attach_name  VARCHAR2    := NULL,
        in_attach_mime  VARCHAR2    := NULL,
        in_attach_data  CLOB        := NULL,
        in_compress     BOOLEAN     := FALSE
    )
    AS
        smtp_from       CONSTANT VARCHAR2(200)  := 'Jan Květina <name.surname@domain.com>';
        smtp_username   CONSTANT VARCHAR2(50)   := NULL;
        smtp_password   CONSTANT VARCHAR2(50)   := NULL;
        smtp_host       CONSTANT VARCHAR2(50)   := 'smtp.server.com';
        smtp_port       CONSTANT NUMBER(4)      := 25;
        smtp_timeout    CONSTANT NUMBER(2)      := 20;
        boundary        CONSTANT VARCHAR2(80)   := '-----5b9d8059445a8eb8c025f159131f02d
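One detail worth calling out: a UTF-8 subject (or recipient name) has to be sent as a MIME encoded word. A minimal sketch of that technique on its own (my own illustration; the full procedure above handles it together with the attachments):

-- encode a UTF-8 string as a MIME encoded word for the Subject header
DECLARE
    l_subject   VARCHAR2(400) := 'Příliš žluťoučký kůň';  -- Czech sample text with diacritics
    l_encoded   VARCHAR2(400);
BEGIN
    -- note: for long subjects, strip the line breaks BASE64_ENCODE inserts every 64 chars
    l_encoded := '=?UTF-8?B?'
        || UTL_RAW.CAST_TO_VARCHAR2(UTL_ENCODE.BASE64_ENCODE(UTL_RAW.CAST_TO_RAW(CONVERT(l_subject, 'AL32UTF8'))))
        || '?=';
    DBMS_OUTPUT.PUT_LINE('Subject: ' || l_encoded);
END;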

Another LONG conversion

Another way to query LONG columns with SQL is to create a function (or packaged function) for each column you need to convert.

FUNCTION get_user_views_text (
    in_view_name user_views.view_name%TYPE
)
RETURN VARCHAR2
AS
    out_text user_views.text%TYPE;
BEGIN
    SELECT v.text
    INTO out_text
    FROM user_views v
    WHERE v.view_name = in_view_name;
    --
    RETURN out_text;
END;

Use the function in SQL:

SELECT v.view_name, v.text_length,
    get_user_views_text(v.view_name) AS text
FROM user_views v
ORDER BY 1;

Or you can create a dynamic function:

FUNCTION get_long_string (
    in_table_name       VARCHAR2,
    in_column_name      VARCHAR2,
    in_where_col1_name  VARCHAR2,
    in_where_val1       VARCHAR2,
    in_where_col2_name  VARCHAR2 := NULL,
    in_where_val2       VARCHAR2 := NULL
)
RETURN VARCHAR2
AS
    l_query     VARCHAR2(4000);
    l_cursor
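Usage of the dynamic variant would then look something like this; a sketch based only on the parameters shown above, here pulling the LONG data_default column from user_tab_columns:

SELECT c.table_name, c.column_name,
    get_long_string (
        in_table_name       => 'USER_TAB_COLUMNS',
        in_column_name      => 'DATA_DEFAULT',
        in_where_col1_name  => 'TABLE_NAME',
        in_where_val1       => c.table_name,
        in_where_col2_name  => 'COLUMN_NAME',
        in_where_val2       => c.column_name
    ) AS data_default
FROM user_tab_columns c
WHERE c.default_length > 0;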

Convert LONG content to lines

Searching views source code is very difficult due to the usage of the prehistoric LONG data type. This script converts views into a structure similar to the user_source view. Let's create the target table first:

--DROP TABLE src_views;
CREATE TABLE src_views (
    name    VARCHAR2(30)    NOT NULL,
    line    NUMBER(8)       NOT NULL,
    text    VARCHAR2(4000),
    --
    CONSTRAINT pk_src_views PRIMARY KEY (name, line)
);

Run the script:

DECLARE
    in_view_name CONSTANT VARCHAR2(30) := '%';
    --
    PROCEDURE save_line_to_table (
        in_name src_views.name%TYPE,
        in_line src_views.line%TYPE,
        in_text src_views.text%TYPE
    )
    AS
    BEGIN
        INSERT INTO src_views (name, line, text)
        VALUES (
            in_name,
            in_line,
            in_text
        );
    END;
    --
    PROCEDURE clob_to_lines (
        in_name VARCHAR2,
        in_clob CLOB
    )
    AS
        clob_len P
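Once the table is populated, searching view source becomes as easy as searching user_source; a simple sketch (the search term is hypothetical):

-- find views mentioning a given identifier
SELECT s.name, s.line, s.text
FROM src_views s
WHERE UPPER(s.text) LIKE '%SOME_COLUMN%'
ORDER BY s.name, s.line;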

Pivoting tables

This article is dedicated to my colleague and keen learner struggling today with a pivot query. This is the modern version of a pivot query, introduced in Oracle 11g. The bad part is that you have to list the columns.

WITH d AS (
    SELECT 'A' AS group_name, 'A' AS col_name, 10 AS col_value FROM DUAL UNION ALL
    SELECT 'A' AS group_name, 'B' AS col_name, 20 AS col_value FROM DUAL UNION ALL
    SELECT 'A' AS group_name, 'C' AS col_name, 30 AS col_value FROM DUAL UNION ALL
    SELECT 'A' AS group_name, 'D' AS col_name, 60 AS col_value FROM DUAL UNION ALL
    SELECT 'B' AS group_name, 'A' AS col_name, 10 AS col_value FROM DUAL UNION ALL
    SELECT 'B' AS group_name, 'B' AS col_name, 20 AS col_value FROM DUAL UNION ALL
    SELECT 'B' AS group_name, 'C' AS col_name, 30 AS col_value FROM DUAL
)
SELECT d.*
FROM d
PIVOT (
    SUM(d.col_value) AS sum,
    COUNT(d.col_value) AS count
    FOR col_
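For comparison (and for anyone stuck before 11g), the same result can be produced the old way with conditional aggregation; a minimal sketch, assuming the same WITH d AS (...) sample data is prepended:

-- old-style pivot with conditional aggregation, reusing the d sample data above
SELECT d.group_name,
    SUM(CASE WHEN d.col_name = 'A' THEN d.col_value END) AS a_sum,
    SUM(CASE WHEN d.col_name = 'B' THEN d.col_value END) AS b_sum,
    SUM(CASE WHEN d.col_name = 'C' THEN d.col_value END) AS c_sum,
    SUM(CASE WHEN d.col_name = 'D' THEN d.col_value END) AS d_sum
FROM d
GROUP BY d.group_name
ORDER BY d.group_name;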

Simple LDAP verification

To verify a user against LDAP/AD you can use this simple script. If you need more, you should explore this great article about LDAP, or this great article for use in APEX, or the latest documentation for DBMS_LDAP.

DECLARE
    FUNCTION ldap_login (
        in_login    VARCHAR2,
        in_pwd      VARCHAR2,
        in_server   VARCHAR2,
        in_port     NUMBER := 389
    )
    RETURN BOOLEAN
    AS
        l_sess DBMS_LDAP.SESSION;
        l_resp PLS_INTEGER;
    BEGIN
        IF (in_login IS NOT NULL AND in_pwd IS NOT NULL) THEN
            DBMS_LDAP.USE_EXCEPTION := TRUE;
            l_sess := DBMS_LDAP.INIT(in_server, in_port);  -- must match with ACL
            l_resp := DBMS_LDAP.SIMPLE_BIND_S(l_sess, in_login, in_pwd);
            l_resp := DBMS_LDAP.UNBIND_S(l_sess);
            RETURN TRUE;
        END IF;
        RETURN FALSE;
    EXCEPTION
    WHEN OTHERS THEN
        BEGIN
            l_resp := DBMS_LDAP.UNBIND_S(l_sess);
        EXCEPTION
        WHEN OTHERS TH
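Calling the function from the executable part of the same block could then look like this; a sketch with hypothetical credentials and server name:

-- executable section of the block above (hypothetical values)
BEGIN
    IF ldap_login (
        in_login  => 'name.surname@domain.com',
        in_pwd    => 'password',
        in_server => 'ldap.domain.com'
    ) THEN
        DBMS_OUTPUT.PUT_LINE('LDAP login OK');
    ELSE
        DBMS_OUTPUT.PUT_LINE('LDAP login failed');
    END IF;
END;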