
Thursday, August 6, 2020

Curl examples


1. Usage: Custom headers.

$: curl -v --cookie "_nmid_secure=YOUR_COOKIE" \
-H "n-user-forwarded={\"customerId\":\"\"}" \
http://hostname:8082/path/to/endpoint


2. Usage: Cookies.

$: curl -v --cookie "_nmid_secure=YOUR_COOKIE" \
http://hostname:8082/path/to/endpoint/

Tuesday, March 24, 2020

Kubernetes, helm, microk8s, Docker: Tips and tricks



1. Usage: Delete all completed pods in a namespace (say the namespace is "internal").

$: kubectl delete po --namespace internal $(kubectl get po --namespace internal | grep Completed | awk '{if ($1 !="") print $1}')

2. Usage: Delete all pods in a namespace (say the namespace is "default").

$: kubectl delete po $(kubectl get po --namespace default | awk '!/NAME/' | awk '{if ($1 !="") print $1}')

3. Usage: Delete all Helm releases except a few (here mysql, redis and php-myadmin are kept).

$: helm delete --purge $(helm list | awk '!/NAME/' | awk '!/mysql/' | awk '!/redis/' | awk '!/php-myadmin/' | awk '{if ($1 !="") print $1}')

4. Usage: Clean all dangling images. This is useful for removing intermediate images left over from multiple builds.

$: alias docker_clean_images='docker rmi $(docker images -a --filter=dangling=true -q)'

5. Usage: Remove stopped containers.

$: alias docker_clean_ps='docker rm $(docker ps --filter=status=exited --filter=status=created -q)'

6. Usage: Remove ALL of your image cache. First make sure all containers are stopped and removed, then:

$: docker kill $(docker ps -q)
$: docker_clean_ps
$: docker rmi $(docker images -a -q)

Wednesday, January 30, 2019

Node


1. Usage: Build the query string before execution.

const sql = connection.format("SELECT * FROM table WHERE foo = ?", ["bar"]);
console.log(sql);

2. Usage: Debug a test locally. Add the line below to the "scripts" section of package.json, then run npm run debug-test.

"debug-test": "mocha --inspect-brk -t 1500"

3. Usage: Merge objects by key, where the values are integers, summing the values for keys present in both.

const _ = require('lodash');

// Merge the objects and sum the values corresponding to the same key (carrier).
// Example: x = { abc: 1, abd: 5 }, y = { abd: 2, abe: 5 } => { abc: 1, abd: 7, abe: 5 }
const x = { abc: 1, abd: 5 };
const y = { abd: 2, abe: 5 };
const merged = _.mergeWith({ ...x }, { ...y }, (a, b) => {
    const left = Number.isFinite(parseInt(a, 10)) ? parseInt(a, 10) : 0;
    const right = Number.isFinite(parseInt(b, 10)) ? parseInt(b, 10) : 0;
    return left + right;
});
console.log(merged); // { abc: 1, abd: 7, abe: 5 }

Sunday, July 1, 2018

Performance Optimization of Relational Database

Case study based on experience from the performance enhancement of an application, done with Qburst Technologies Pvt Ltd (http://www.qburst.com/).

MySQL-Based Applications - Improve Performance with Application-Level Optimization


MySQL is one of the database solutions used by many small and medium-scale organizations for their web applications. MySQL delivers good performance with well-designed table structures and relations. Most web applications run into MySQL performance issues when the volume of data grows beyond expectations.
When we design web-based applications with MySQL in the backend, we usually follow a normalized design pattern with look-up tables and foreign key relations. This kind of normalized design ensures good performance and low read/write times while the data volume stays within expected limits (i.e., in practice there is an upper limit on the volume of data that can be handled with relational databases, defined by the type of data and the nature of data access).
Data volume within limit:
Large volumes of data are usually generated when we deal with real-world scenarios like daily activities, stock market data, social media data, etc., which generally depend on human interactions. We can predict user behaviour only up to a limit, and the same goes for the volume of data generated in such scenarios.

When data volume grows:
If we go for a normalized design, we keep similar kinds of data in the same place, and searching for mutually related data may force us to use self joins on heavily populated tables. A self join of a heavily populated table is a big issue from a performance point of view. When the data volume grows, the normalized structure may not deliver the expected performance, due to several factors.
Barriers:
One of them is foreign key relations, and another is the normalized structure itself. We use foreign keys to keep track of relational data kept in normalized tables, which can cause issues like increased insert/update time due to the overhead of creating and maintaining indexes on the database.
Alternatives:
There is a cap on the volume of data that can be handled effectively by MySQL with a normalized structure, which forces us to think about alternatives such as a non-relational database like MongoDB. One issue we will face with a non-relational database is handling the relations between mutually dependent real-world data, though there are ways to tackle that situation.
Budget:
Even though non-relational databases promise faster reads, they come with issues like poor write speed. There are, of course, techniques to tackle these issues, but most of them are heavy in terms of cost and/or effort for small and medium-scale organizations.
Economic alternatives:
The main question that gains popularity is: "Is it possible to extend the practical upper limit on the volume of data that can be handled effectively?" The answer is YES.
We can raise the upper limit on the volume of data that MySQL can handle by applying techniques like de-normalization, archiving, etc.
MySQL responds to plain select queries better than to select queries that involve joins with the same or other tables.
De-normalization can help improve read time in many ways. One of them is keeping data in a purpose-oriented manner: de-normalization can improve read performance by duplicating data into multiple tables that are meant for display, thus avoiding joins.
For example, to make the display of data faster, we can keep data optimized for display in a separate table that is used only for display, regardless of the duplication. One possible issue is managing the consistency of the data. In a normalized and properly designed database, consistency is assured by the database to a large extent. But as we keep duplicate copies of data to make reads faster, the responsibility for keeping the copies consistent shifts from the database to your application. This approach increases the coding effort needed to handle updates across the multiple copies of the data.
How:
De-normalization can reduce read time if we add tables that keep collective, display-ready data. We do this by defining rules for how data is displayed in all the scenarios we need, handling the combined display rules together with less overhead, and duplicating the data into multiple tables so that each table is optimized for its purpose. This improves the performance of data population at the front end, at the cost of extra processing overhead in the code.
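As a rough sketch of this idea (not part of the original case study), the snippet below writes an order into the normalized tables and, inside the same transaction, refreshes a flat order_display table that the listing page reads from. The mysql2/promise package and every table and column name here are assumptions made only for illustration.

// Sketch only: keep a de-normalized "display" table in sync from the application.
// Assumed schema: orders(id, customer_id, total), customers(id, name),
// plus a flat order_display(order_id, customer_name, total) used for listing pages.
const mysql = require('mysql2/promise');

async function createOrder(pool, customerId, total) {
    const conn = await pool.getConnection();
    try {
        await conn.beginTransaction();

        // Normalized write: the source of truth.
        const [result] = await conn.query(
            'INSERT INTO orders (customer_id, total) VALUES (?, ?)',
            [customerId, total]
        );

        // De-normalized write: duplicate what the listing page needs,
        // so reads can skip the orders/customers join entirely.
        await conn.query(
            `INSERT INTO order_display (order_id, customer_name, total)
             SELECT ?, c.name, ? FROM customers c WHERE c.id = ?`,
            [result.insertId, total, customerId]
        );

        await conn.commit();
        return result.insertId;
    } catch (err) {
        await conn.rollback(); // keep both copies consistent on failure
        throw err;
    } finally {
        conn.release();
    }
}

// The listing page now reads a single table, no join:
// SELECT order_id, customer_name, total FROM order_display ORDER BY order_id DESC LIMIT 50;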
Archiving:
Real-world data usually becomes less relevant or less frequently accessed as time passes. For example, if an application manages marketing statistics, the data of the last 6 months or 1 year may be relevant, while older data is accessed less and less. The less relevant or rarely accessed data can be moved from the main database to an archiving system, which can be another database, or even tapes or disks, depending on the demand for old data. If a user needs older data, it can be provided on demand.
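A minimal sketch of such an archiving job is below, again assuming mysql2/promise and hypothetical stats / stats_archive tables (identical structure, with a created_at column); the 12-month cut-off is arbitrary.

// Sketch only: move rows older than a cut-off from the live table to an archive table.
const mysql = require('mysql2/promise');

async function archiveOldStats(pool, months = 12) {
    const conn = await pool.getConnection();
    try {
        // Compute the cut-off once so the copy and the delete target exactly the same rows.
        const [[{ cutoff }]] = await conn.query(
            'SELECT DATE_SUB(NOW(), INTERVAL ? MONTH) AS cutoff', [months]
        );

        await conn.beginTransaction();
        await conn.query('INSERT INTO stats_archive SELECT * FROM stats WHERE created_at < ?', [cutoff]);
        await conn.query('DELETE FROM stats WHERE created_at < ?', [cutoff]);
        await conn.commit();
    } catch (err) {
        await conn.rollback();
        throw err;
    } finally {
        conn.release();
    }
}

// In production this would normally run in batches (LIMIT plus a loop) from a scheduled job,
// so the live table is never locked for long.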
Observe your data:
There are techniques that can make MySQL perform better, but they depend on the structure and behaviour of your data. Structured monitoring can give an idea of the root cause of the problem, and a proper solution can be formed based on that. Log your data access, build metrics from those logs, and find the data-access peaks and performance dips. These peaks and dips explain the real problem/root cause, and the solution should be formed around it.
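One low-effort way to get such metrics is to wrap query execution in the application and log how long each query takes. A rough sketch follows, again assuming mysql2/promise; the 200 ms threshold and the log format are arbitrary choices. MySQL's own slow query log serves a similar purpose on the server side.

// Sketch only: wrap pool.query to record per-query timings, so peaks and slow
// queries can be spotted later by aggregating the logged lines.
const mysql = require('mysql2/promise');

function instrumentPool(pool, slowMs = 200) {
    const originalQuery = pool.query.bind(pool);

    pool.query = async (sql, params) => {
        const started = Date.now();
        try {
            return await originalQuery(sql, params);
        } finally {
            const elapsed = Date.now() - started;
            // In a real setup this would feed a metrics system instead of the console.
            if (elapsed >= slowMs) {
                console.warn(`[slow-query] ${elapsed}ms :: ${String(sql).slice(0, 120)}`);
            } else {
                console.log(`[query] ${elapsed}ms`);
            }
        }
    };
    return pool;
}

// Usage (connection details are placeholders):
// const pool = instrumentPool(mysql.createPool({ host: 'localhost', user: 'app', database: 'app' }));
// const [rows] = await pool.query('SELECT * FROM stats WHERE created_at > ?', [someDate]);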

Based on optimization tasks I worked on in a project for Qburst Technologies Pvt Ltd.

References: 

https://www.toptal.com/sql-server/sql-database-tuning-for-developers


https://www.toptal.com/database/the-definitive-guide-to-nosql-databases

Friday, May 4, 2018

React Experiments


OS: ubuntu 16.04

$ sudo apt install npm
$ npm -v
3.5.2
$ create-react-app pwa-experiment
Creating a new React app in /var/www/reactTrials/first/pwa-experiment.
You are using Node v4.2.6 so the project will be bootstrapped with an old unsupported version of tools.
Please update to Node 6 or higher for a better, fully supported experience.
Installing packages. This might take a couple of minutes.
Installing react, react-dom, and react-scripts...
$ sudo ln -s /usr/bin/nodejs /usr/bin/node
$ npm install -g create-react-app
$ cd pwa-experiment
$ npm install --save react-router@3.0.5
$ ls -la
total 132
drwxrwxr-x   5 anas anas  4096 May  5 01:46 .
drwxrwxr-x   3 anas anas  4096 May  5 01:40 ..
-rw-rw-r--   1 anas anas   218 May  5 01:42 .gitignore
drwxrwxr-x 760 anas anas 28672 May  5 01:46 node_modules
-rw-rw-r--   1 anas anas   406 May  5 01:46 package.json
drwxrwxr-x   2 anas anas  4096 May  5 01:42 public
-rw-rw-r--   1 anas anas 80668 May  5 01:42 README.md
drwxrwxr-x   2 anas anas  4096 May  5 01:42 src


Sunday, July 23, 2017

Mercurial: Tips and tricks


1. Usage: Completely remove a named branch.

$: hg strip "branch(${BRANCHNAME})"

2. Now repeat the strip for all the branches you want to remove, then pull and update:

$:hg pull -u

3. To check if you have any unwanted changes

$:hg outgoing


4. Add, Remove tag

$:hg tag -r newrevisionhash stable
$:hg tag --remove stable


5. Incoming and Outgoing

$:hg incoming
$:hg incoming -p
$:hg incoming -b <branchname>
$:hg incoming -p -b <branchname>

6. Copy files across branches

$:hg update -r to-branch
$:hg revert -r from-branch file
$:hg ci -m 'copied single file from from-branch to to-branch'

7. Dirty merge: If both branches have made changes to a file that you want to keep, a dirty trick is to create a merge of the two branches using hg merge (possibly on yet another branch), check that in, and then copy the single file between the merge and the to-branch. (Reference: Stack Overflow)

$:hg update -r to-branch
$:hg branch merge-branch
$:hg merge -r from-branch
$:hg ci -m 'temp merge to be discarded'
$:hg update -r to-branch
$:hg revert -r merge-branch single-file
$:hg ci -m 'merged single-file from from-branch to to-branch'
$:hg strip merge-branch

8. To see heads in a particular branch <branch name>

$:hg heads <branch name>


9. To commit as a user <userDisplayName>

$:hg commit -u <userDisplayName>