Connect to MySQL through an SSH tunnel from PHP

shell_exec("ssh -fNg -L 3307:$dbServerHost:3306 user@remote_host");
$connection = new mysqli('127.0.0.1', $username, $password, $dbname, 3307);

Note: the tunnel forwards local port 3307 to $dbServerHost:3306, so mysqli must connect to 127.0.0.1 on port 3307, not to $dbServerHost directly.
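The same tunnel can be set up and verified by hand before PHP uses it. A sketch with placeholder host and user names, assuming the mysql client is installed:

```shell
# Open a background tunnel: local port 3307 -> dbserver.internal:3306
ssh -fNg -L 3307:dbserver.internal:3306 user@remote_host
# Verify the tunnel works before pointing the application at it
mysql -h 127.0.0.1 -P 3307 -u dbuser -p -e 'SELECT 1'
```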
Composer show is the command that lists the packages installed by you or your team members:

composer show -i
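A few related variants of the same command, run inside a project directory (`-i` is an alias for `--installed`; the package names below are only examples):

```shell
composer show --installed       # same as -i: list installed packages
composer show 'monolog/*'       # filter packages by vendor pattern
composer show monolog/monolog   # show details for a single package
```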
ELK enterprise application - ELK quick build - Logstash

1. Install the JDK

The operation of Elasticsearch and Logstash depends on the Java environment.

Download and unzip the JDK binary package:
- tar xf jdk-8u144-linux-x64.tar.gz -C /usr/local
- mv /usr/local/jdk1.8.0_144 /usr/local/java
- cd ~
Configure the Java environment variables.
Add the following at the end of the ~/.bashrc file:
- export JAVA_HOME=/usr/local/java
- export JRE_HOME=$JAVA_HOME/jre
- export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
- export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
Make the configuration take effect.
source ~/.bashrc
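The exports above can be sanity-checked in a throwaway shell before committing them to ~/.bashrc (the paths are the ones used in this article):

```shell
# Apply the same exports as ~/.bashrc and confirm PATH picks up the JDK
export JAVA_HOME=/usr/local/java
export JRE_HOME=$JAVA_HOME/jre
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
echo "$PATH" | grep -q '/usr/local/java/bin' && echo "PATH ok"
# If the JDK is really unpacked there, this prints the version:
# java -version
```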
2. Install Logstash

On Linux servers it is recommended to download and install the RPM package.
2.1. Download the logstash installation package
- touch /etc/default/logstash
- ln -s /usr/local/java/bin/java /usr/bin/java
- rpm -ivh logstash-6.2.4.rpm
- cd ~
2.2. Configure systemd startup

Installing the RPM creates the startup-script configuration file /etc/logstash/startup.options. Generate the systemd unit from it:

/usr/share/logstash/bin/system-install /etc/logstash/startup.options systemd

Note: if the generated unit fails to start, you can create your own unit file:
- [root@l ~]# cat /etc/systemd/system/logstash.service
- [Unit]
- Description=logstash
-
- [Service]
- Type=simple
- ExecStart=/usr/share/logstash/bin/logstash "--path.settings" "/etc/logstash"
- ExecStop=/bin/kill -s QUIT $MAINPID
- ExecReload=/bin/kill -s HUP $MAINPID
- WorkingDirectory=/usr/share/logstash/bin
-
- [Install]
- WantedBy=multi-user.target
-
- [root@l ~]# systemctl daemon-reload #####Update
- [root@l ~]#
- [root@l ~]# systemctl list-unit-files |grep logstash
- logstash.service disabled
- [root@l ~]#
- [root@l ~]# systemctl restart logstash.service #### Restart
2.3. Errors encountered
[root@l opt]# /usr/share/logstash/bin/system-install /etc/logstash/startup.options systemd
Using provided startup.options file: /etc/logstash/startup.options
Manually creating startup for specified platform: systemd
/usr/share/logstash/vendor/jruby/bin/jruby: Line 401: /usr/bin/java: No such file or directory
Unable to install system startup script for Logstash.
Solution
- ln -s /usr/local/java/bin/java /usr/bin/java
- /usr/share/logstash/bin/system-install /etc/logstash/startup.options systemd
3. Configuration
- cd /etc/logstash/conf.d/
- chown -R logstash /etc/logstash/conf.d
- mkdir /opt/logstash
- touch /opt/logstash/messages
- chown -R logstash /opt/logstash
- chown -R logstash /opt/logstash/messages
- chown -R logstash /var/log/messages
Shipper configuration file (logstash_shipper.conf)
- vim logstash_shipper.conf
- ###########################################
- input {
-   file {
-     type => "messages"
-     path => "/var/log/messages"
-     start_position => "beginning"
-     sincedb_path => "/dev/null"
-   }
- }
-
- output {
-   if [type] == "messages" {
-     redis {
-       host => "10.0.0.132"
-       data_type => "list"
-       key => "messages"
-       port => 6379
-       db => 2
-       password => "123456"
-     }
-   }
- }
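Once the shipper is running, you can check that events are landing in the Redis list. This check is not from the original article; host, port, db and password are taken from the config above, and redis-cli must be installed:

```shell
redis-cli -h 10.0.0.132 -p 6379 -a 123456 -n 2 llen messages
```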
Indexer configuration file (logstash_indexer.conf). Note: this configuration must run on a separate node; otherwise the two outputs would emit the same logs twice, and with the Redis cache in between the duplicated output would grow without bound.
- vim logstash_indexer.conf
- ######################################
- input {
-   redis {
-     host => "10.0.0.132"
-     data_type => "list"
-     key => "messages"
-     password => "123456"
-     db => 2
-   }
- }
-
- output {
-   if [type] == "messages" {
-     elasticsearch {
-       hosts => ["10.0.0.130"]
-       index => "messages-%{+YYYY-MM-dd}"
-     }
-   }
- }
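Once the indexer is running, the daily index should appear in Elasticsearch. A quick check (not from the original article; host from the config above, default port 9200 assumed):

```shell
curl -s 'http://10.0.0.130:9200/_cat/indices/messages-*?v'
```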
4. Test

- cd /usr/share/logstash/bin/
- [root@l bin]# ./logstash --path.settings /etc/logstash/ -r /etc/logstash/conf.d/ --config.test_and_exit
- Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
- Configuration OK
5. Start
- systemctl start logstash.service
- systemctl enable logstash.service
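After starting, a standard systemd check (generic commands, not from the original article) confirms the service is up and lets you tail its logs:

```shell
systemctl status logstash.service
journalctl -u logstash.service -f
```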
- #Login with a user
- oc login https://192.168.99.100:8443 -u developer -p developer
- #Login as system admin
- oc login -u system:admin
- #User information
- oc whoami
- #View your configuration
- oc config view
- #Update the current context to have users login to the desired namespace:
- oc config set-context `oc config current-context` --namespace=
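As a hedged follow-up to the last command above, the current context's namespace can be set and then read back (the project name `myproject` is a placeholder):

```shell
oc config set-context "$(oc config current-context)" --namespace=myproject
oc config view --minify -o jsonpath='{..namespace}'
```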
- #Use specific template
- oc new-app https://github.com/name/project --template=
- #New app from a different branch
- oc new-app --name=html-dev nginx:1.10~https://github.com/joe-speedboat/openshift.html.devops.git#mybranch
- #Create objects from a file:
- oc create -f myobject.yaml -n 
- #Create or merge objects from file
- oc apply -f myobject.yaml -n 
- #Update existing object
- oc patch svc mysvc --type merge --patch '{"spec":{"ports":[{"port": 8080, "targetPort": 5000 }]}}'
- #Monitor pod status
- watch oc get pods
- #Show labels
- oc get pods --show-labels
- #Gather information on a project's pod deployment with node information
- oc get pods -o wide
- #Hide inactive pods
- oc get pods --show-all=false
- #Display all resources
- oc get all,secret,configmap
- #Get the OpenShift console address
- oc get -n openshift-console route console
- #Get the pod name from the selector and rsh into it
- POD=$(oc get pods -l app=myapp -o name)
- oc rsh $POD
- #Exec a single command in a pod
- oc exec $POD $COMMAND
- #Copy a file from the myrunning-pod-2 path to the current location
- oc rsync myrunning-pod-2:/tmp/LogginData_20180717220510.json .
- #Read resource schema doc
- oc explain dc
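The `--patch` argument to `oc patch` above is plain JSON, so a quick local lint catches quoting mistakes before the patch hits the cluster (using python3's json.tool here is an assumption about available tooling):

```shell
# Validate the merge-patch body from the cheat sheet locally
PATCH='{"spec":{"ports":[{"port": 8080, "targetPort": 5000 }]}}'
echo "$PATCH" | python3 -m json.tool > /dev/null && echo "patch is valid JSON"
```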
- #List available IS for the openshift project
- oc get is -n openshift
- #Import an image from an external registry
- oc import-image --from=registry.access.redhat.com/jboss-amq-6/amq62-openshift -n openshift jboss-amq-62:1.3 --confirm
- #List available IS and templates
- oc new-app --list
- oc create -f https://raw.githubusercontent.com/wildfly/wildfly-s2i/wf-18.0/imagestreams/wildfly-centos7.json
- oc new-app wildfly~https://github.com/fmarchioni/ocpdemos --context-dir=wildfly-basic --name=wildfly-basic
- oc expose svc/wildfly-basic
- oc new-build --binary --name=mywildfly -l app=mywildfly
- oc patch bc/mywildfly -p '{"spec":{"strategy":{"dockerStrategy":{"dockerfilePath":"Dockerfile"}}}}'
- oc start-build mywildfly --from-dir=. --follow
- oc new-app --image-stream=mywildfly
- oc expose svc/mywildfly
- #Get the nodes list
- oc get nodes
- #Check on which node your pods are running
- oc get pods -o wide
- #Schedule an application to run on another node
- oc patch dc myapp -p '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/hostname": "ip-10-0-0-74.acme.compute.internal"}}}}}'
- #List all pods which are running on a node
- oc adm manage-node node1.local --list-pods
- #Add a label to a node
- oc label node node1.local mylabel=myvalue
- #Remove a label from a node
- oc label node node1.local mylabel-
- #Create a PersistentVolumeClaim, update the DeploymentConfig to include the PV, and attach a volume mount at the specified mount path
- oc set volume dc/file-uploader --add --name=my-shared-storage \
-   -t pvc --claim-mode=ReadWriteMany --claim-size=1Gi \
-   --claim-name=my-shared-storage --claim-class=ocs-storagecluster-cephfs \
-   --mount-path=/opt/app-root/src/uploaded \
-   -n my-shared-storage
- #List storage classes
- oc -n openshift-storage get sc
- #Manual build from source
- oc start-build ruby-ex
- #Stop a build that is in progress
- oc cancel-build 
- #Changing the log level of a build:
- oc set env bc/my-build-name BUILD_LOGLEVEL=[1-5]
- #Manual deployment
- oc rollout latest ruby-ex
- #Pause automatic deployment rollout
- oc rollout pause dc $DEPLOYMENT
- #Resume automatic deployment rollout
- oc rollout resume dc $DEPLOYMENT
- #Define resource requests and limits in the DeploymentConfig
- oc set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi
- #Define livenessProbe and readinessProbe in the DeploymentConfig
- oc set probe dc/nginx --readiness --get-url=http://:8080/healthz --initial-delay-seconds=10
- oc set probe dc/nginx --liveness --get-url=http://:8080/healthz --initial-delay-seconds=10
- #Define a Horizontal Pod Autoscaler (hpa)
- oc autoscale dc $DC_NAME --max=4 --cpu-percent=10
- #Create a route
- oc expose service ruby-ex
- #Read the route's host attribute
- oc get route my-route -o jsonpath --template="{.spec.host}"
- #Make a service idle; when the service is next accessed, it will automatically boot up the pods again:
- oc idle ruby-ex
- #Read a service IP
- oc get services rook-ceph-mon-a --template='{{.spec.clusterIP}}'
- #Delete all resources
- oc delete all --all
- #Delete resources for one specific app
- oc delete services -l app=ruby-ex
- oc delete all -l app=ruby-ex
- #Clean up old docker images on nodes,
- #keeping up to three tag revisions and keeping resources (images, image streams and pods) younger than sixty minutes:
- oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m
- #Prune every image that exceeds defined limits:
- oc adm prune images --prune-over-size-limit
- #Check the status of the current project
- oc status
- #Get events for a project
- oc get events --sort-by='{.lastTimestamp}'
- #Get the logs of the myrunning-pod-2-fdthn pod
- oc logs myrunning-pod-2-fdthn
- #Follow the logs of the myrunning-pod-2-fdthn pod
- oc logs -f myrunning-pod-2-fdthn
- #Tail the logs of the myrunning-pod-2-fdthn pod
- oc logs myrunning-pod-2-fdthn --tail=50
- #Check the integrated Docker registry logs:
- oc logs docker-registry-n-{xxxxx} -n default | less
- #Run cluster diagnostics
- oc adm diagnostics
- #Create a secret from the CLI and mount it as a volume in a deployment config:
- oc create secret generic oia-secret --from-literal=username=myuser --from-literal=password=mypassword
- oc set volumes dc/myapp --add --name=secret-volume --mount-path=/opt/app-root/ --secret-name=oia-secret
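Once mounted, each secret key appears as a file under the mount path. A sketch of checking this (the `app=myapp` selector is a placeholder):

```shell
POD=$(oc get pods -l app=myapp -o name | head -1)
oc rsh $POD cat /opt/app-root/username
```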
- oc adm policy add-role-to-user admin oia -n python
- oc adm policy add-cluster-role-to-user cluster-reader system:serviceaccount:monitoring:default
- oc adm policy add-scc-to-user anyuid -z default
- #Manage node state (mark a node unschedulable)
- oc adm manage-node node1.local --schedulable=false
- #List installed operators
- oc get csv
- #Export the IS, BC, DC and SVC as a template
- oc export is,bc,dc,svc --as-template=app.yaml
- #Show the current user in the shell prompt
- function ps1(){
-   export PS1='[\u@\h($(oc whoami -c 2>/dev/null|cut -d/ -f3,1)) \W]\$ '
- }
- #Backup openshift objects
- oc get all --all-namespaces --no-headers=true | awk '{print $1","$2}' | while read obj
- do
-   NS=$(echo $obj | cut -d, -f1)
-   OBJ=$(echo $obj | cut -d, -f2)
-   FILE=$(echo $obj | sed 's/\//-/g;s/,/-/g')
-   echo $NS $OBJ $FILE
-   oc export -n $NS $OBJ -o yaml > $FILE.yml
- done
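The backup loop's filename mangling can be dry-run with canned input instead of live `oc get` output (the namespace and object names below are made up):

```shell
# Simulate `oc get ... | awk '{print $1","$2}'` output and apply the same
# cut/sed transforms the backup loop uses.
printf 'default,pod/registry-1\nmyproject,dc/myapp\n' | while read obj
do
  NS=$(echo $obj | cut -d, -f1)
  OBJ=$(echo $obj | cut -d, -f2)
  FILE=$(echo $obj | sed 's/\//-/g;s/,/-/g')
  echo "$NS $OBJ -> $FILE.yml"
done
```

Slashes and commas both become dashes, so `default,pod/registry-1` yields the file name `default-pod-registry-1.yml`.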
I had this issue with MySQL 5.7. The following worked althoug...