2015-12-31

Happy New Year

This was supposed to be a rather lengthy post, but Zscaler was tired today, making me wait for gateway.zscaler.net for too long, and now time is running out, but...

Another year has passed. At work I didn't accomplish at all what I had hoped for, but I have done some other things, like helping out with evaluating software and some integration problems. I did not write one line of JavaScript code, which I really had hoped to do. This is only partly due to lack of time; I didn't have any good projects for JavaScript, and I do not code just for the sake of coding. I didn't buy a Raspberry Pi, same thing there, I didn't have a good project for a Raspberry Pi. I didn't try Perl6. There are a lot of things I didn't do. But I tested the Amazon infrastructure and learned a few things about the Azure cloud. I managed to create a travel expense report by myself; I had never done that before, so I'm happy about that.
Two colleagues have left this year; that is never fun. A data warehouse specialist left, and that was not only un-fun, it was almost painful. I'm still emotionally attached to my old data warehouse and those who work there.
Actually, Zscaler just went bananas; it blocked my internet access with a password prompt.
I would never type in a password in a frame like that. 'Need help? Contact your IT team': this is the 31st of December at 16:45, I do not think anyone will answer.
Year's end is the most important IT day of the year, and you should make an extra effort to avoid anything that can interrupt or disrupt IT services on this day and the next.

2015-12-06

Extracting SAP projects with BAPI - 3

I am restructuring some long-running Data Warehouse extraction workflows. In total these workflows take some 4 hours today, which is ridiculous. One part of restructuring the workflows is to use modern Integration Tag Language idioms so newcomers can understand them; the present archaic syntax is a bit tricky. I have rewritten the second part in much the same way as the first.
So far I have cut execution time down from 4 hours to 30 minutes. This is achieved by parallelizing the jobs running the BAPIs. I have now rewritten the last parts of the workflow in much the same way I rewrote the first part.
The result is good, but not good enough. The runtime is still dependent on the number of objects defined in SAP; in a few years, when the number of projects has doubled, the runtime will have doubled too. I strongly advocate full load over delta load, since full load is much simpler to set up and is self-healing if something goes wrong. But here is a case where full load is not performant enough: 30 minutes and growing by the day. I will rewrite these workflows from full load into a hybrid delta load, where I aim at a more stable runtime below 10 minutes.

One job in the last rewrite is of interest. SAP information BAPIs are structured with one BAPI giving a list of keys to all objects, and then you have an army of BAPIs giving detailed information about individual objects. BAPI_NETWORK_GETINFO is a bit different: it takes an array of network identities and responds with detailed info for all the objects in one go. Here the € list operator comes to the rescue; it takes a PHP array and reformats it into an RFC import array.

The BAPI_NETWORK_GETINFO job is run once for all networks, in sequence:

  1. The <forevery> job iterator creates the iterator from the NWDRVR MySQL table.
  2. The BAPI_NETWORK_GETINFO BAPI is then run for each row in the SQL result table, one by one (addressed by @J_DIR/driver1).
  3. Lastly, all results are stored in the corresponding MySQL tables.

If the list of network objects is large enough you have to split the array into chunks and execute them separately, to overcome SAP session limits and performance problems. We have some 9000 projects, and that is too many in our environment to execute in one go.
A small rewrite of the job will split the SQL result into 8 chunks, distribute them over separate workers and execute them in parallel:

Here BAPI_NETWORK_GETINFO is run in 8 parallel workers (a rough Python analogue of the idea follows the list):

  1. The <forevery> iterator splits the SQL result into 8 chunks; each chunk is executed by a separate worker in parallel.
  2. Each worker then runs the BAPI_NETWORK_GETINFO BAPI for each row in its part of the SQL result table, one by one (addressed by @R_DIR/driver1).
  3. Lastly, each worker stores all results in the corresponding MySQL tables.
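
Outside ITL the same idea looks roughly like this. This is only a conceptual Python sketch of the chunk-and-parallelize pattern, not the actual job: fetch_driver_rows() and run_bapi_for_chunk() are hypothetical stand-ins for the NWDRVR read and the per-chunk BAPI_NETWORK_GETINFO calls.

# Conceptual Python analogue of the 8-worker split; not the ITL job itself.
from concurrent.futures import ProcessPoolExecutor

def fetch_driver_rows():
    return []                      # imagine: the rows of the NWDRVR MySQL table

def run_bapi_for_chunk(chunk):
    return []                      # imagine: BAPI_NETWORK_GETINFO results for this chunk

def run_parallel(workers=8):
    rows = fetch_driver_rows()
    # split the driver rows into <workers> roughly equal chunks
    chunks = [rows[i::workers] for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # each chunk goes to its own worker, all chunks run in parallel
        results = list(pool.map(run_bapi_for_chunk, chunks))
    return [row for part in results for row in part]   # caller stores these in MySQL

The only real difference from the sequential job is the chunk split and the worker pool.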

With this slight modification the run time for the job is cut by a factor of 8. This is really parallel programming made easy. Compare this with the visual workflow programming so popular today; I think you will agree this is easier to set up.

2015-11-29

What's in a return code?

The last week I have mused on 'What's in a return code, and what is it good for?'. It started with an innocent question:
'How do I see which job in a workflow bombed out?'
'The first job with result equal to zero.'
'There is no zero result, there are only ones and nulls.'

My job scheduler's return codes are boolean: 1 = success, 0 = failure. That is not entirely true; the return code can also be NULL, which normally means 'not executed yet'. I decided to take a look in the log.

The first job without a return code was trunc_dsaldo; up until trunc_dsaldo all jobs had executed successfully (result=1). It turned out trunc_dsaldo had been successfully bypassed; the boolean return code does not really allow for a third 'bypassed' condition. The registration of a bypassed job is bypassed altogether, so it is impossible to tell a bypassed job from a job that has not executed.

I like boolean return codes. Either a job executes successfully or it does not; it could not be simpler, were it not for the bypass condition. In this particular case it was the next job, dsaldo_read, that failed: due to an infrastructure fuckup the connection to the database log table was lost, so the job could not register its failure. A very unlikely situation, but nevertheless it happened.

What is a return code good for?
The most obvious purpose is that the return code should tell the result of a job. In this case it does not do that very well. You can argue that the result of a bypassed job is unknown and should be left as a NULL return code, but you can also say it was successfully bypassed and should qualify for a successful return code; then again, a bypassed job can also be seen as a failure. Right now I lean towards giving bypassed a unique non-zero return code while keeping the boolean type. This keeps the boolean simplicity, with the side effect that it indicates the job was successfully bypassed. I still do not know if this is a good thing or not. I have to scrutinise some unnecessarily complex code carefully before I make any changes. If I decide to change the code I will rewrite the job-related 'return code' code, since it has been subject to some patching over the years.
Another, and maybe the most important, function of a return code is testability: successor jobs must be able to test the outcome of a predecessor. That has already been taken care of; you can set up a job prereq testing the outcome of a predecessor job, for example:
<job name='successor' ...>
  <prereq type='job' predecessor='previousJob' result='success' bypassed='ok'/>
The successor job will run if the execution of previousJob was a success or previousJob was bypassed.
But the job return code has not got the attention it deserves, which is a nice way of saying there is some odd logic and bad code lurking in my job scheduler concerning return codes. Maybe the return code should be a class. I'm not much of an OO fan, but return codes are important and maybe deserve a class of their own.
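
As a thought experiment, here is a minimal sketch of what such a class could look like, in Python rather than the scheduler's own PHP. Only 1 = success, 0 = failure and the NULL state come from the post; the BYPASSED value is an assumption.

# Thought-experiment sketch only; the scheduler is not actually written like this.
class ReturnCode:
    FAILURE = 0           # from the post: 0 = failure
    SUCCESS = 1           # from the post: 1 = success
    BYPASSED = 2          # assumption: a unique non-zero code for 'successfully bypassed'

    def __init__(self, value=None):
        self.value = value        # None corresponds to NULL, 'not executed yet'

    def __bool__(self):
        # keeps the boolean simplicity: only an executed, non-failed job tests as 'ok'
        return self.value not in (None, self.FAILURE)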

2015-11-15

Extracting SAP projects with BAPI - 2

In the previous post I described how we cut the time of extracting project data from SAP. In this post I will show the new, improved workflow coded in Integration Tag Language, where workflows are expressed as schedules. The first part of the project's schedule looks like this:
[ITL code screenshot]
This is just initialisation of constants and logic needed for the execution of the schedule. The second part:
[ITL code screenshot]
This part extracts the PROJ table from SAP and creates an iterator with all projects. The last of these jobs, 'generate_driver', creates a PHP array called driver0, which is the iterator used to drive the BAPI jobs. The first set of BAPI jobs are the quick ones that run in less than 10 minutes:
[ITL code screenshot]
These jobs run in parallel; you only insert a parallel='yes' directive on the job statement. The iterator from the generate_driver job is declared by the <mydriver> tag and runs the BAPI once for each project in the array0 iterator. The id of the project is transferred via the @PROJECT_DEFINITION variable, as you can see inside the <rfc> section. In the <sql> section the wanted tables are declared; the default is all tables, but BAPIs often create a lot of 'bogus' tables, so we explicitly state which tables we want to import into the Data Warehouse. The next job is a bit more complex: it is the long-running BAPI_PROJECT_GETINFO job:
[ITL code screenshot]
In part one we decided to distribute the workload over 9 workers; by doing so we need to truncate the tables up front, since we do not want our 9 workers to do 9 table truncations. First we create a dummy job as a container for our jobs, which we declare parallel='yes' so that it runs in parallel with the preceding jobs. Inside the dummy job there is a table truncate job and subsequently the BAPI extraction job. Here the iterator array0 is defined with the <forevery> tag; the iterator is split up into 9 chunks, which will all be executed in parallel. The rows in each chunk are transferred as before by the <mydriver> iterator, which is given a chunk by the piggyback declaration. If you study this job carefully you will see there is some very complex processing going on; if you want a more detailed description I have written a series of posts on parallel execution. I am very happy about the piggyback coupling of the iterators: by joining the iterators, a complex workflow is described both succinctly and eloquently.
The fifth and last part of the schedule shows a job similar to the one just described; this time we only need to run the BAPI job in two workers:
[ITL code screenshot]
If you take the time to study this ITL workflow you will find there is some advanced parallel processing in there, reducing the run time from about one and a half hours to less than 10 minutes. But so far we have used brute force to decrease the run time; by applying some amount of cleverness we can reduce the time even further and make it more stable. I hope to do this another weekend, and if I do I will write a post about that too.

Extracting SAP projects with BAPI - 1

Some years ago I was asked to extract project information from SAP for reporting/BI purposes. I decided to base the extraction solely on BAPIs. I wanted to test the BAPIs and thus avoid writing ABAP code and/or tracing which SAP tables contain project info. It sounded like a good strategy: no SAP development, just clean use of SAP's premade extraction routines. It turned out I had to deploy quite a few BAPIs for a complete extraction. First I started with the project list BAPI:
BAPI_PROJECTDEF_GETLIST to get all projects (if you are not familiar with BAPI extraction read this first). Then I just had to run all the other BAPIs one by one for each project:
BAPI_BUS2001_GET_STATUS
BAPI_PROJECTDEF_GETDETAIL
BAPI_BUS2001_GETDATA
BAPI_PROJECT_GETINFO
BAPI_BUS2054_GETDATA
BAPI_NETWORK_GETLIST
BAPI_NETWORK_GETINFO
BAPI_NETWORK_COMP_GETLIST
BAPI_NETWORK_COMP_GETDETAIL
BAPI_BUS2002_GET_STATUS
BAPI_BUS2002_ACT_GETDATA
BAPI_REQUISITION_GETDETAIL
BAPI_PO_GETDETAIL


In the beginning it was fine running these BAPIs in sequence: very few projects, and only one company (VBUKR) using projects. Last time I looked it took about 30 minutes to run the routine; it was a long time, but what the heck, 30 minutes during night time is not a big deal. Last week I had a call from the present maintainers of the Data Warehouse: 'Your project schedule takes hours and hours each night. The code is a bit odd, can you explain how it works, so we can do something about it?'. To understand the 'archaic' code in the schedule, the first thing I had to do was to clean it up, replacing obsolete idioms with more modern code constructs others could understand. Then I split the original schedule into smaller, more logical schedules, the first one consisting of:
BAPI_PROJECTDEF_GETLIST
BAPI_BUS2001_GET_STATUS
BAPI_PROJECTDEF_GETDETAIL
BAPI_BUS2001_GETDATA
BAPI_PROJECT_GETINFO
BAPI_BUS2054_GETDATA

took more than two hours to run. A look into the projects data showed 16000+ projects, belonging to more companies than I created the extraction for. We replaced BAPI_PROJECTDEF_GETLIST with direct extraction of the SAP PROJ table, selecting only the interesting company, about 8000 projects, and ran the BAPIs in parallel; this brought the execution time down to about 1 hour 20 minutes. Analysing job statistics showed the first three BAPIs only took a little more than 500 seconds each, BAPI_PROJECT_GETINFO 5000 seconds and finally BAPI_BUS2054_GETDATA about 1000 seconds. Distributing BAPI_PROJECT_GETINFO over 9 workers and BAPI_BUS2054_GETDATA over 2 workers should make all BAPIs execute in between 500 and 600 seconds (5000 / 9 ≈ 555 and 1000 / 2 = 500). This is a balanced scheme and the execution time is acceptable: from over 2 hours to about 10 minutes. In the next post I will show the new, improved execution schedule.

2015-11-01

Understand the ISO 8601 date format

How hard can it be? It seems to be incomprehensibly hard for some to understand the ISO 8601 date format YYYY-MM-DD and acknowledge ISO 8601 as the international standard.


One example: at the company we use Microsoft SharePoint collaborative software. As an American, i.e. U.S., company, Microsoft seems unaware of ISO standards; date formats, for example, are national in SharePoint. The Swedish date format in SharePoint is ISO 8601, since we Swedes long ago adopted the SI/ISO standard. The US date format is not ISO 8601. The US SharePoint admins at the company are unaware of ISO and do not want to promote ISO 8601 as the default, as they believe it is the Swedish date format only; instead they use the US date format, since they think the US standard is the world standard. If Microsoft could introduce ISO 8601 as a recognised date format in SharePoint, it would be much easier for me to evangelize the benefits of using one recognised standard format as the default for dates in the company. This will probably only happen after the Chinese have established a Chinese hegemony; their date format is sort of big endian, in accordance with ISO 8601.
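
For what it is worth, producing an ISO 8601 date is a one-liner in most languages; a small Python illustration, where the date itself is just an example:

from datetime import date

d = date(2015, 11, 1)
print(d.isoformat())              # 2015-11-01  (ISO 8601: year-month-day, big endian)
print(d.strftime("%m/%d/%Y"))     # 11/01/2015  (US style, for comparison)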



      

Good reading for programmers

Once in a while you stumble upon a great post; this one by Sean Hickey, The evolution of a software engineer, is also quite fun.

2015-10-25

Hello SUSE!

Today I logged on to SUSE Linux for the first time!
First I had to install a VNC server. I logged on with the help of PuTTY and fired up YaST in tty mode.
I installed TigerVNC; with YaST it was a breeze.


Then I 'VNC logged into' SUSE; since it has a KDE GUI I felt at home right away.

Now I 'only' have to create an ETL server for the Data Warehouse. This will take some time, since I have to do it in my spare time; it is sort of a hobby to me, I do IT architecture during work hours. I will blog about my progress.

2015-10-22

Data Warehouse upgrade

After having procrastinated on a general upgrade of the Data Warehouse for a long time, we have now gone from Ubuntu 12.04 to 14.04. To my great surprise this upgrade seems to have worked well. We also painlessly upgraded MySQL from version 5.5 to 5.6. We tried MySQL 5.7 but found a number of incompatibilities, so we decided to wait with MySQL 5.7 until we have ironed out the problems. We had a moment when we thought we had kissed the Japanese DW satellite goodbye: after upgrading we restarted the server, but it didn't come up again. We had to wait for the guys in Japan to push the power button. But on the whole it was a smooth transition, about ten or so upgraded servers without any major issues.
After the upgraded Data Warehouse landscape has stabilized, we will also try to move our ETL server from Mageia to SUSE Linux. The Data Warehouse started on Mandrake Linux in 2001 and continued on Mandriva and lately on Mageia. These Linux distros have served us well, but now we will try SUSE, and not only for the hell of it: SUSE has a partnership with SAP, and that might make the Data Warehouse more acceptable in the company. The sad story is that nobody in the company knows what Mageia Linux is, which makes the Data Warehouse kind of an oddball. SUSE Linux is more digestible for the IT community in the company. But we will try openSUSE, not SUSE Linux Enterprise; after all, the Data Warehouse was not formed after the standard enterprise model.

2015-09-21

Curl and sharepoint authentication

Our SharePoint uses NTLM for authentication; it took me some time to figure that out. I access a SharePoint site from our Linux Data Warehouse environment using curl. It seems curl can handle NTLM, and this is what the curl documentation says about NTLM:

The NTLM authentication method was designed by Microsoft and is used by IIS web servers. It is a proprietary protocol, reverse-engineered by clever people and implemented in curl based on their efforts. This kind of behavior should not be endorsed, you should encourage everyone who uses NTLM to switch to a public and documented authentication method instead, such as Digest.

curl --ntlm -u userid  http://sharepointServer/thePathTo/Items
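
If you would rather do the same thing from a script, the requests-ntlm Python package handles the NTLM handshake. A minimal sketch; the URL is the one from the curl example, while the DOMAIN prefix and the password are placeholders:

# Minimal sketch using the requests-ntlm package; DOMAIN\userid and the
# password are placeholders, not real values.
import requests
from requests_ntlm import HttpNtlmAuth

resp = requests.get(
    "http://sharepointServer/thePathTo/Items",
    auth=HttpNtlmAuth("DOMAIN\\userid", "password"),
)
print(resp.status_code)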

2015-09-13

Big O



Recently I came across something that reminded me of my youth: big-O calculations. Big-O is used to express the complexity of algorithms or programs; it gives you a rough worst-case estimate of the performance of an algorithm, and of how runtime and space grow relative to the size of the input.
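
As a concrete illustration of what the notation describes (my own toy example): when the input doubles, the first function below does roughly four times the work, the second one roughly twice.

# Tiny illustration of big-O, nothing more.
def has_duplicates_quadratic(items):      # O(n^2): compares every pair
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items):         # O(n) on average: one pass with a set
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False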
I didn't know anything at all about big-O, so when I was asked about it I replied 'does anyone really do that kind of calculation?'. It reminds me of the time when people tried to estimate run times of programs by calculating the revolving speed of drum memories, the timing of machine instructions and so on. When I started working with computers people had just stopped doing that type of calculation, and I was very happy about that.
I was told big-O is a big thing: it gives a worst-case measure of the algorithm, and the higher the big-O figure, the likelier it is to run slower. But there are so many ifs and buts. These days you cannot do proper runtime calculations just by evaluating the source code; you need to understand the optimizer, high-level instructions, libraries, hardware and operating system. If you have a programming language that is good at, e.g., parallelizing and optimizing your code with a JIT compiler, that may skew your calculations, if I got big-O right.
I have checked with some younger colleagues. One said when I asked: 'I recall this from school, but I never used it in real life; real runtime figures are dependent on so many other things. I can imagine it being of use for assembler or C programmers, but for modern high-level programming languages it is probably of limited use.' The others I asked did not know what big-O was; one said he remembered something about calculating runtime of programs.
With my limited knowledge, I see big-O as a simple and clever way to compare different program snippets programmatically. This can come in handy if you are developing a programming language optimizer or a JIT compiler, but otherwise it is of little use. Big-O is an interesting subject and gives food for thought. I would not be surprised if I use big-O one of these days.


I have spent years optimizing computer systems, anything from network throughput to assembler algorithms, SQL queries, physical IO of databases, etc. I 'only' used real measurements, which you can rely on since they are real.


Links I found useful when I studied big-O:

http://discrete.gr/complexity/
https://www.interviewcake.com/article/big-o-notation-time-and-space-complexity
http://bigocheatsheet.com/

2015-09-02

Summer's (almost) gone

Summer ended on the 1st of September 2015 here in Stockholm. We had a fantastic summery August, which is rare here; most years summer is over in mid-August, but not this year.
Autumn view from the office 2nd September, rainy but still warm.


I spent some time this August getting acquainted with MS SQL Server, and it was an unexpectedly pleasant experience. The last time I looked at SQL Server was in 2001, and at that time I dismissed the software as a toy not fit for serious work. That is not true anymore; it looks like a good, serious database manager. I have to spend a lot more time with the database to really understand SQL Server, but it was a breeze to create a database with constraints, indexes and all that. The Management Studio is a nice development environment. Transact-SQL will probably take a long time to master. On the whole SQL Server is the same as other mature relational database managers, but different (they all are :-).
While working with SQL Server I realised I miss DBA work, real proper DBA work: designing databases, fixing performance problems, helping developers with tricky SQL queries. Not just installing the software and running utilities. I have to do something about this :-)

2015-08-09

The DW satellite is down

Last Friday MK-check sent alarms about the Data Warehouse satellite in Japan, indicating the server was down or not contactable. The most likely cause was problems with the communications; after all, the server is on the other side of the globe. We decided to wait over the weekend and see if the communication was re-established. And sure enough, late that evening we got a reassuring mail telling us all was well.
Since it was the weekend and the replication process is self-healing, we didn't have to do anything; if anything was wrong in the database it would be corrected the day after by the next replication cycle. That's the way I like to build processes: no manual monitoring or intervention needed; computers should be able to fix operational problems themselves.


This satellite 'server' is a tiny Dell Optiplex 790 with an Intel i3 processor, 4GB RAM and two 1TB SATA disks; it cost less than 800€.
We installed the satellite server as a temporary cache in 2012, to overcome the slow communication line between HQ and the Osaka plant. I figured if it lasted a year it would be good enough (the network guys were just about to crank up the bandwidth). Now into its 4th year of continuous operation the computer is alive and kicking, serving the factory guys with local response times for the cost of the electricity to run it. I have not seen similar features in any other Business Intelligence system. No matter how fast your high-cost, super-duper central BI system is, you will not beat this tiny satellite at the remote branch office. No way.


Sometimes when I brag about the Data Warehouse, people tell me 'Don't be ridiculous, you can't be better than IBM, Oracle, SAP, Microsoft and the others'. It is very hard to prove I'm better than the competition; luckily for me, I do not need to do that, the Data Warehouse is out there kicking ass with the competition. Normally I do not make a lot of fuss about myself, but sometimes it's hard to be humble: I'm better than the big guys. That doesn't mean I disrespect the others, far from it, I have a lot of respect for what they have done, but someone has to be best, it's as simple as that. It's not always about being smarter and having more resources; sometimes it is about having the simpler concept and executing it well.


2015-07-25

Simples vacances

This year's vacation project: RESTful web and SQLite. For some reason I have never done any serious web development. Last week I decided to learn what RESTful is all about. I started with RestServer and it actually turned out to be very simple to use. I also had to learn how to use Composer; that too turned out to be very simple once you know you have to include the Composer bootstrap vendor/autoload.php, something that took me some hours to figure out. Just for the hell of it I added SQLite for data storage; I had never used it before, and it is very simple to use, a lean, mean SQL machine. Now I'm armed with new tools waiting to be used, simple to use but powerful. I have some ideas for how I can use my new skills.
For once, what I thought would be a simple project actually was simple. Normally I'm confused in the beginning; I am not a good reader of documentation, I just crack on with things. Often that's not a good strategy, but this time it was.
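
SQLite really is that lean; here is a minimal sketch using Python's built-in sqlite3 module (my own toy example, the vacation project itself used PHP and RestServer):

import sqlite3

conn = sqlite3.connect("demo.db")    # a single file is the whole database
conn.execute("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, txt TEXT)")
conn.execute("INSERT INTO notes (txt) VALUES (?)", ("hello sqlite",))
conn.commit()
print(conn.execute("SELECT id, txt FROM notes").fetchall())
conn.close()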

The SQLite manual in my lap was simple to read; that's why I look so happy :)
Somewhere in France 2015, Montpellier? No, it was Gignac.