ffmpeg -i ./some.mp4 -vcodec hevc_nvenc -b:v 5000k -r 30 ./nvenc.mp4
dism /online /cleanup-image /startcomponentcleanup /resetbase
Extract only the fullchain and the private key from the whole output, and normalize them.
cat ./response.json |jq .data[].fullchain |tr -d \" |sed 's/\\r\\n/\n/g' |sed 's/\\n-----BEGIN CERTIFICATE-----/-----BEGIN CERTIFICATE-----/' > ./cert.chain
cat ./response.json |jq '.data[].privateKeyEncrypted.key' |tr -d \" |base64 -d > ./enc.private.pem
Decrypt the key (strip the passphrase):
openssl rsa -in ./enc.private.pem -out ./private.pem
Enter pass phrase for ./enc.private.pem:
writing RSA key
Or as a one-liner:
cat ./response.json |jq .data[].root |tr -d \" |sed 's/\\r\\n/\n/g' > ./cert.chain && cat ./response.json |jq .data[].fullchain |tr -d \" |sed 's/\\r\\n/\n/g' |sed 's/\\n-----BEGIN CERTIFICATE-----/-----BEGIN CERTIFICATE-----/' >> ./cert.chain && cat ./response.json |jq '.data[].privateKeyEncrypted.key' |tr -d \" |base64 -d > ./enc.private.pem && openssl rsa -in ./enc.private.pem -out ./private.pem && cat ./private.pem >> ./cert.chain
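The normalization step above can be sketched in isolation. A minimal, self-contained demo of the `sed 's/\\r\\n/\n/g'` trick: `printf` stands in for what `jq -r` would emit for the fullchain field (the certificate text itself is made up).

```shell
# Simulate the raw string jq would print for the fullchain field:
# literal backslash-r-backslash-n sequences, not real newlines.
printf '%s\n' '-----BEGIN CERTIFICATE-----\r\nMIIBsample\r\n-----END CERTIFICATE-----' \
  | sed 's/\\r\\n/\n/g' > /tmp/cert.chain
cat /tmp/cert.chain
# → -----BEGIN CERTIFICATE-----
# → MIIBsample
# → -----END CERTIFICATE-----
```

The `sed` pattern `\\r\\n` matches the four literal characters `\r\n`; the replacement `\n` is a real newline (GNU sed), which is what turns the flattened PEM back into a valid certificate file.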
ls -l /sys/block/sd* | sed 's/.*\(sd.*\) -.*\(ata.*\)\/h.*/\2 => \1/'
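What the sed above does to one line of `ls -l` output can be shown on a canned sample (the sysfs path here is illustrative, not from a real host):

```shell
# One sample line shaped like `ls -l /sys/block/sd*` output:
echo 'lrwxrwxrwx 1 root root 0 Jan 1 00:00 /sys/block/sda -> ../devices/pci0000:00/0000:00:1f.2/ata1/host0/target0:0:0/0:0:0:0/block/sda' \
  | sed 's/.*\(sd.*\) -.*\(ata.*\)\/h.*/\2 => \1/'
# → ata1 => sda
```

Group 1 captures the `sdX` device name before the `->` arrow, group 2 the `ataN` port from the symlink target, giving a quick ATA-port-to-device map.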
fastest way to test streams:
ffmpeg -i rtsp://<user>:<pass>@<ipaddress>:554/path ./output.mp4   (if terminal only)
ffplay rtsp://<user>:<pass>@<ipaddress>:554/path   (gui)
find camera URL paths in the iSpy DB or in the ZoneMinder HCL
If you are new to security software, read:
https://wiki.zoneminder.com/Dummies_Guide
grafanadb=> select table_schema, table_name, column_name from information_schema.columns where column_name like '%email%';
 table_schema | table_name | column_name
--------------+------------+----------------
 public       | temp_user  | email_sent
 public       | temp_user  | email_sent_on
 public       | user       | email_verified
 public       | org        | billing_email
 public       | user       | email
 public       | team       | email
 public       | temp_user  | email
(7 rows)

grafanadb=> select id, login, email, name, company from public.user where email like '%n.family%';
 id | login               | email               | name                     | company
----+---------------------+---------------------+--------------------------+---------
 85 | n.family@domain.com | n.family@domain.com | Имя Фамилиев Отчествович |
(1 row)

grafanadb=> update public.user set email='n.family@domain.ru' where email like '%n.family%';
UPDATE 1
new go
1. sudo apt-get update
2. wget https://go.dev/dl/go1.21.0.linux-amd64.tar.gz
3. sudo tar -xvf go1.21.0.linux-amd64.tar.gz
4. sudo mv go /usr/local
5. export GOROOT=/usr/local/go
6. export GOPATH=$HOME/go
7. export PATH=$GOPATH/bin:$GOROOT/bin:$PATH
8. source ~/.profile
Ansible variable precedence
Source: http://docs.ansible.com/ansible/latest/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable
From least to most important
command line values (for example, -u my_user, these are not variables)
role defaults (defined in role/defaults/main.yml)
inventory file or script group vars
inventory group_vars/all
playbook group_vars/all
inventory group_vars/*
playbook group_vars/*
inventory file or script host vars
inventory host_vars/*
playbook host_vars/*
host facts / cached set_facts
play vars
play vars_prompt
play vars_files
role vars (defined in role/vars/main.yml)
block vars (only for tasks in block)
task vars (only for the task)
include_vars
set_facts / registered vars
role (and include_role) params
include params
extra vars (always win precedence)
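A minimal sketch of the two extremes of this list (file and variable names are made up): a value defined in role defaults sits near the bottom, so the same variable supplied as an extra var always overrides it.

```yaml
# roles/myrole/defaults/main.yml -- role defaults, second-lowest precedence
greeting: "from role defaults"
```

Running `ansible-playbook site.yml -e greeting='from extra vars'` then makes every reference to `greeting` resolve to the extra-vars value, since extra vars always win.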
docker pull grafana/grafana
docker stop grafana
docker rm grafana
docker run -d --name=grafana -p 3000:3000 --restart=always -v /var/lib/grafana:/var/lib/grafana grafana/grafana
meshtastic docker run
docker run -d -p 8081:8080 --restart always --name Meshtastic-Web ghcr.io/meshtastic/web
systemctl disable snapd.service
systemctl disable snapd.socket
systemctl disable snapd.seeded.service
systemctl mask snapd.service
snap list
snap remove lxd
snap remove core20
snap remove snapd
apt autoremove --purge snapd
board info
ipmitool -I lanplus -H IPSRV -U admin -P PASSWORD fru
ipmi reset
ipmitool -I lanplus -H IPSRV -U admin -P PASSWORD mc reset cold
\d [NAME] describe table, index, sequence, or view
\d{t|i|s|v|S} [PATTERN] (add "+" for more detail)
list tables/indexes/sequences/views/system tables
\da [PATTERN] list aggregate functions
\db [PATTERN] list tablespaces (add "+" for more detail)
\dc [PATTERN] list conversions
\dC list casts
\dd [PATTERN] show comment for object
\dD [PATTERN] list domains
\df [PATTERN] list functions (add "+" for more detail)
\dg [PATTERN] list groups
\dn [PATTERN] list schemas (add "+" for more detail)
\do [NAME] list operators
\dl list large objects, same as \lo_list
\dp [PATTERN] list table, view, and sequence access privileges
\dT [PATTERN] list data types (add "+" for more detail)
\du [PATTERN] list users
\l list all databases (add "+" for more detail)
\z [PATTERN] list table, view, and sequence access privileges (same as \dp)
search path
SET search_path TO myschema,public;
SHOW search_path;
file.mib to file.txt
for i in *.mib; do mv "$i" "$(basename "$i" .mib).txt"; done
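The rename loop can be tried safely in a throwaway directory (the directory and filenames below are made up):

```shell
# Sketch: create two dummy .mib files, then rename them to .txt.
rm -rf /tmp/mibdemo
mkdir -p /tmp/mibdemo && cd /tmp/mibdemo
touch IF-MIB.mib SNMPv2-SMI.mib
# basename strips the .mib suffix; the loop re-adds .txt.
for i in *.mib; do mv "$i" "$(basename "$i" .mib).txt"; done
ls -1
# → IF-MIB.txt
# → SNMPv2-SMI.txt
```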
Host list:
cat ./list
host1
host2
Command:
for i in $(cat ./list); do host "$i" |cut -d ' ' -f 4; done
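The `cut -d ' ' -f 4` part relies on the shape of `host` output, which looks like `host1.example.com has address 10.0.0.1`; field 4 is the address. Simulated below, since a live DNS answer isn't guaranteed:

```shell
# Simulated `host` output line; field 4 (space-delimited) is the address.
echo 'host1.example.com has address 10.0.0.1' | cut -d ' ' -f 4
# → 10.0.0.1
```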
ubuntu
apt-cache policy gitlab-ce
apt install gitlab-ce=13.12.9-ce.0
rhel
yum --showduplicates list gitlab-ce
yum install gitlab-ce-15.3.3-ce.0.el7
https://wiki.openjdk.java.net/display/shenandoah/Main
Basic configuration
Basic configuration and command line options:
-Xlog:gc (since JDK 9) or -verbose:gc (up to JDK 8) would print the individual GC timings.
-Xlog:gc+ergo (since JDK 9) or -XX:+PrintGCDetails (up to JDK 8) would print the heuristics decisions, which might shed light on outliers, if any.
-Xlog:gc+stats (since JDK 9) or -verbose:gc (up to JDK 8) would print the summary table on Shenandoah internal timings at the end of the run.
It is almost always a good idea to run with logging enabled. This summary table conveys important information about GC performance, and we would almost inevitably ask for one in a performance bug report. Heuristics logs are useful to figure out GC outliers.
Other recommended JVM options are:
-XX:+AlwaysPreTouch: committing heap pages into memory helps to reduce latency hiccups
-Xms and -Xmx: making the heap non-resizeable with -Xms = -Xmx reduces hiccups with heap management. Coupled with AlwaysPreTouch, the -Xms = -Xmx would commit all memory on startup, which avoids hiccups when memory is finally used. -Xms also defines the low boundary for memory uncommit, so with -Xms = -Xmx all memory would stay committed. That said, if you want to configure Shenandoah for lower footprint, then setting lower -Xms is recommended. You need to decide how low to set it to balance the commit/uncommit overhead vs memory footprint. In many cases, setting -Xms arbitrarily low would be fine.
Using large pages greatly improves performance on large heaps. There are two ways to opt in. -XX:+UseLargePages would enable hugetlbfs (Linux) or large-page (Windows, with appropriate privileges) support. -XX:+UseTransparentHugePages would enable it transparently. With transparent huge pages, it is recommended to set /sys/kernel/mm/transparent_hugepage/enabled and /sys/kernel/mm/transparent_hugepage/defrag to "madvise". When running with AlwaysPreTouch, it will also pay the defrag costs upfront at startup.
-XX:+UseNUMA: while Shenandoah does not support NUMA explicitly yet, it is a good idea to enable this to enable NUMA interleaving on multi-socket hosts. Coupled with AlwaysPreTouch, it provides better performance than the default out-of-the-box configuration
-XX:-UseBiasedLocking: there is a tradeoff between uncontended (biased) locking throughput, and the safepoints JVM does to enable and disable them as needed. For latency-oriented workloads, it makes sense to turn biased locking off.
-XX:+DisableExplicitGC: invoking System.gc() from user code forces Shenandoah to perform additional GC cycle; it might be profitable to disable this to protect from the code that abuses System.gc(). It usually does not hurt, as -XX:+ExplicitGCInvokesConcurrent gets enabled by default, which means the concurrent GC cycle would be invoked, not the STW Full GC.
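Putting the options above together, a launch line might look like the sketch below. Heap size, logging selector, and the application jar name are illustrative; -XX:+UnlockExperimentalVMOptions is needed on JDK builds where Shenandoah is still experimental:

```
java -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC \
     -Xms16g -Xmx16g \
     -XX:+AlwaysPreTouch \
     -XX:+UseNUMA \
     -XX:+UseTransparentHugePages \
     -XX:-UseBiasedLocking \
     -XX:+DisableExplicitGC \
     -Xlog:gc \
     -jar app.jar
```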
Heuristics
transport.type: netty3
http.type: netty3
# Recover only after the given number of nodes have joined the cluster. Can be seen as "minimum number of nodes to attempt recovery at all".
gateway.recover_after_nodes: 8
# Time to wait for additional nodes after recover_after_nodes is met.
gateway.recover_after_time: 5m
# Inform ElasticSearch how many nodes form a full cluster. If this number is met, start up immediately.
gateway.expected_nodes: 10
yum install java-latest-openjdk

vim /etc/sysconfig/elasticsearch
JAVA_HOME=/usr/lib/jvm/java-13-openjdk-13.0.1.9-2.rolling.el7.x86_64

vim /etc/elasticsearch/jvm.options
# JDK 13 java:
-XX:+UnlockExperimentalVMOptions
-XX:+UseShenandoahGC
#-XX:+UseConcMarkSweepGC
#-XX:CMSInitiatingOccupancyFraction=75
#-XX:+UseCMSInitiatingOccupancyOnly
-XX:MaxDirectMemorySize=16g
-Djdk.nio.maxCachedBufferSize=262144
mmapfs -> niofs -> hybridfs
jdk13
-XX:+UnlockExperimentalVMOptions
-XX:+UseShenandoahGC
thread_pool.write.queue_size 200 -> 300
http.max_content_length = 500mb
es6 — use_adaptive_replica_selection: true
curl -s -XPOST -H "Content-Type: application/json" 'http://localhost:9200/_license/start_basic?acknowledge=true'
Add a snippet section:
<Extension charconv>
Module xm_charconv
AutodetectCharsets windows-1251, utf-8
</Extension>
and add to any log inputs, verbatim:
Exec convert_fields("auto", "utf-8");
reboot:
echo 1 > /proc/sys/kernel/sysrq
echo b > /proc/sysrq-trigger

halt:
echo 1 > /proc/sys/kernel/sysrq
echo o > /proc/sysrq-trigger

keep SysRq enabled all the time:
kernel.sysrq = 1
nvme format /dev/nvme1 -n 1 -r
parted -s -a optimal /dev/nvme1n1 mklabel GPT mkpart primary xfs 0% 100% name 1 'nvme01'
mkfs.xfs /dev/nvme1n1p1
apt install letsencrypt
stop nginx/apache
certbot certonly --standalone -d mydomain.tld
From the OS:
racadm jobqueue delete -i JID_CLEARALL_FORCE
and reset iDRAC
it worked
Stress-test the disks
Create test files in the _current_ directory: 10 files, 100 GB in total. Not sure about the block size; needs checking.
sysbench --file-total-size=100G --file-block-size=4K --file-num=10 fileio prepare
Then run 32 threads, with fsync after every written block: a random read/write test over the 10 prepared files (100 GB total). The test duration is in seconds; here it is 2 hours.
sysbench --file-total-size=100G --file-test-mode=rndrw --time=7200 --file-block-size=4K --file-num=10 --threads=32 --file-fsync-all=on fileio run
Stress-test the CPU with 72 threads for 2 hours.
sysbench --time=7200 --threads=72 cpu run
parted -s -a optimal /dev/nvme2n1 mklabel GPT print mkpart primary xfs 0% 100% name 1 'nvme03' print
create a new index in graylog
shutdown graylog
remove the new index (e.g. graylog_8)
rename graylog_deflector to fit the name of the new index
add an alias graylog_deflector
The solution, based on the above:
In the Graylog web UI, go to System > Indices and select the Default index set
In the Maintenance menu, select Rotate active write index. It will create a graylog_0 index (but it will not work yet)
Go to the console and stop Graylog:
sudo service graylog-server stop
Handle the 1000 field problem:
curl -XPUT -H 'Content-Type: application/json' 'http://localhost:9200/_all/_settings?preserve_existing=true' -d '{
"index.mapping.total_fields.limit" : "5000"
}'
Close the graylog_deflector index:
curl -XPOST 'localhost:9200/graylog_deflector/_close?pretty'
Delete the graylog_deflector index:
curl -XDELETE 'localhost:9200/graylog_deflector?pretty'
Add the graylog_deflector as alias to the newly created graylog_0 index:
curl -XPOST 'localhost:9200/_aliases?pretty' -H 'Content-Type: application/json' -d'
{
"actions" : [
{ "add" : { "index" : "graylog_0", "alias" : "graylog_deflector" } }
]
}'
Restart Graylog:
sudo service graylog-server start