[root@N1 ~]# netstat -an | grep ESTAB
udp 0 0 192.168.1.120:35570 212.47.249.141:123 ESTABLISHED
udp 0 0 192.168.1.120:55589 108.59.2.24:123 ESTABLISHED
To pull out the remote IP, the usual approach is to run awk twice:
[root@N1 ~]# netstat -an | grep ESTAB | awk '{print $5}' | awk -F: '{print $1}'
108.59.2.24
212.47.249.141
In fact, by giving awk two delimiters at once (space and `:`), the remote IP can be extracted in a single pass. Multiple delimiters are written like this:
[root@N1 ~]# netstat -an | grep ESTAB | awk -F '[ :]+' '{print $6}'
108.59.2.24
212.47.249.141
# The trailing + makes runs of consecutive delimiters count as a single one
LM-SHC-16507744:Desktop yanwxu$ cat testd
123#ruby#3#abc
456#rechel#25#def
789#wang#30#ghi
LM-SHC-16507744:Desktop yanwxu$ awk -F# '{print $1,$2}' testd
123 ruby
456 rechel
789 wang
LM-SHC-16507744:Desktop yanwxu$ awk -v FS='#' '{print $1,$2}' testd
123 ruby
456 rechel
789 wang
LM-SHC-16507744:Desktop yanwxu$ awk -v FS='#' OFS='+++++' '{print $1,$2}' testd
awk: syntax error at source line 1
context is
>>> OFS=++++ <<<
awk: bailing out at source line 1
LM-SHC-16507744:Desktop yanwxu$ awk -v FS='#' -v OFS='+++++' '{print $1,$2}' testd
123+++++ruby
456+++++rechel
789+++++wang
LM-SHC-16507744:Desktop yanwxu$ awk -F# '{print $1,$2}' testd
123 ruby
456 rechel
789 wang
# Without the comma there is no OFS between the fields -- they are simply concatenated:
LM-SHC-16507744:Desktop yanwxu$ awk -F# '{print $1 $2}' testd
123ruby
456rechel
789wang
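The multi-delimiter split that `-F '[ :]+'` performs can be mirrored in Python with `re.split`, in case you are post-processing such output in a script. This is a small sketch; the sample lines are copied from the netstat output above:

```python
import re

# Sample lines in the shape of the netstat output shown earlier.
lines = [
    "udp 0 0 192.168.1.120:35570 212.47.249.141:123 ESTABLISHED",
    "udp 0 0 192.168.1.120:55589 108.59.2.24:123 ESTABLISHED",
]

for line in lines:
    # Split on runs of spaces and colons, like awk -F '[ :]+'.
    fields = re.split(r"[ :]+", line)
    print(fields[5])   # the remote IP; awk's $6 is index 5 here
```

Note that awk numbers fields from $1 while Python lists index from 0, so awk's `$6` corresponds to `fields[5]`.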
1. Sorting a dict, method one
d = {"name": "zs", "age": 18, "city": "Shenzhen", "tel": "1362626627"}
res = sorted(d.items(), key=lambda x: x[0], reverse=False)  # sort by key
print(res)
new_dict = {}
for x in res:
    new_dict[x[0]] = x[1]   # dicts keep insertion order in Python 3.7+
print(new_dict)
2. Sorting a dict, method two
foo = [{"name": "zs", "age": 19}, {"name": "ll", "age": 54},
       {"name": "wa", "age": 17}, {"name": "df", "age": 23}]
res = sorted(foo, key=lambda x: x["name"], reverse=False)
print(res)
3. Sorting tuples
foo = [("zs", 19), ("ab", 2), ("t", 8)]
res = sorted(foo, key=lambda x: x[0], reverse=False)
print(res)
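The same `sorted` + `key` pattern also sorts a dict by value rather than by key; a small sketch using only the standard library:

```python
scores = {"zs": 18, "ab": 2, "t": 8}
# items() yields (key, value) pairs; x[1] picks the value.
by_value = sorted(scores.items(), key=lambda x: x[1])
print(by_value)   # [('ab', 2), ('t', 8), ('zs', 18)]
```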
valid_columns = set(User.__table__.columns.keys())  # the columns defined on the model
This gives us the set of legal schema fields, so we can filter the source data against it:
valid_attrs = {k: v for k, v in i.items() if k in valid_columns}
Fields that need individual handling get special-cased:
adjust_time(valid_attrs)
def adjust_time(attrs):
    # mongo stores millisecond timestamps
    attrs["created_at"] = datetime.datetime.fromtimestamp(attrs["created_at"] / 1000.0)
    attrs["updated_at"] = datetime.datetime.fromtimestamp(attrs["updated_at"] / 1000.0)
    if attrs.get("deleted_at"):
        attrs["deleted_at"] = datetime.datetime.fromtimestamp(attrs["deleted_at"] / 1000.0)
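The column filtering and the millisecond-timestamp conversion can be sketched end to end with the standard library alone. The names below are illustrative stand-ins, not the real model; in the actual code `valid_columns` comes from `User.__table__.columns.keys()`:

```python
import datetime

# Stand-in for the model's column set.
valid_columns = {"user_id", "name", "created_at"}

# A Mongo-style source document; "_id" is not a column on the model,
# and "created_at" is a millisecond timestamp.
doc = {"user_id": 1, "name": "zs", "_id": "abc", "created_at": 1609459200000}

# Keep only keys that are real columns.
valid_attrs = {k: v for k, v in doc.items() if k in valid_columns}

# Convert the millisecond timestamp to a datetime, as adjust_time does.
valid_attrs["created_at"] = datetime.datetime.fromtimestamp(
    valid_attrs["created_at"] / 1000.0)

print(sorted(valid_attrs))   # "_id" has been filtered out
```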
After that we can happily update or insert the SQLAlchemy object:
already_exist = User.get_by_id(s, i["user_id"])
if already_exist:
    logging.info("update item: %s", i)
    User.update_by_user_id(s, already_exist.user_id, valid_attrs)
else:
    logging.info("insert item: %s", i)
    s.add(User(**valid_attrs))
Here, update_by_user_id on the model is defined as:
@classmethod
def update_by_user_id(cls, session, user_id, attr_map):
    session.query(cls).filter(cls.user_id == user_id).update(attr_map)
Done!
def save_app_info(self):
    try:
        # update app_info
        print(self.dicts)
        data = db_session.query(App_Info).filter_by(
            app_id=self.app_id, mall_name=self.mall_name).first()
        if data:
            # was a set comprehension over the bare name `dicts`;
            # a plain loop over self.dicts is the idiomatic fix
            for k, v in self.dicts.items():
                setattr(data, k, v)
            print(data)
        else:
            # insert app_info
            db_session.execute(App_Info.__table__.insert(), self.dicts)
        db_session.commit()
    except Exception:
        db_session.rollback()
        other.error("save app_data failed, details: {}".format(traceback.format_exc()))
    finally:
        db_session.close()
msg_count = db.session.query(sqlalchemy.func.count(SMS_Receive.id))\
.filter(and_(SMS_Receive.IsShow == True, SMS_Receive.PhoneNumber_id == number))\
.scalar()
sms_count_info = SMSCount(PhoneNumber_id=number, SMS_Count=msg_count)
db.session.add(sms_count_info)
db.session.commit()
Then the update code:
get_sms_count = SMSCount.query.filter_by(PhoneNumber_id=number).first()
get_sms_count.SMS_Count += 1
get_sms_count.PhoneNumber_id = number
db.session.commit()
Notice that the insert path adds the object to the session and then commits, while the update path first queries the row out, modifies it, and then commits.
admin = User.query.filter_by(username='admin').first()
admin.email = 'my_new_email@example.com'
db.session.commit()
user = User.query.get(5)
user.name = 'New Name'
db.session.commit()
yum install epel-release
CentOS Linux 8 - AppStream 23 B/s | 38 B 00:01
Error: Failed to download metadata for repo 'appstream': Cannot prepare internal mirrorlist: No URLs in mirrorlist
CentOS Linux 8 reached end of life, so mirrorlist.centos.org no longer serves it. The fix is to point the repos in /etc/yum.repos.d/ at vault.centos.org instead:
[baseos]
name=CentOS Linux $releasever - BaseOS
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=BaseOS&infra=$infra
#baseurl=http://mirror.centos.org/$contentdir/$releasever/BaseOS/$basearch/os/
baseurl=https://vault.centos.org/centos/$releasever/BaseOS/$basearch/os/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
[appstream]
name=CentOS Linux $releasever - AppStream
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=AppStream&infra=$infra
#baseurl=http://mirror.centos.org/$contentdir/$releasever/AppStream/$basearch/os/
baseurl=https://vault.centos.org/centos/$releasever/AppStream/$basearch/os/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
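The edit shown in the stanzas above can also be scripted. The sketch below is my own illustration (not from the original post): it applies the same two substitutions, commenting out the dead mirrorlist line and rewriting the baseurl to vault.centos.org, against an in-memory copy of one repo stanza:

```python
import re

repo = """[baseos]
name=CentOS Linux $releasever - BaseOS
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=BaseOS&infra=$infra
#baseurl=http://mirror.centos.org/$contentdir/$releasever/BaseOS/$basearch/os/
gpgcheck=1
enabled=1
"""

# Comment out the dead mirrorlist and point baseurl at vault.centos.org.
fixed = re.sub(r"^mirrorlist=", "#mirrorlist=", repo, flags=re.M)
fixed = re.sub(r"^#baseurl=http://mirror\.centos\.org/\$contentdir",
               "baseurl=https://vault.centos.org/centos", fixed, flags=re.M)
print(fixed)
```

In practice you would loop this over the files in /etc/yum.repos.d/ and write the results back.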
$ sudo apt remove tracker tracker-extract tracker-miner-fs
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
gir1.2-clutter-gst-3.0 gir1.2-evince-3.0 libgsf-1-114 libgsf-1-common libmusicbrainz5-2
libtagc0
Use 'sudo apt autoremove' to remove them.
The following packages will be REMOVED:
gnome-shell-extension-desktop-icons gnome-sushi insync-nautilus nautilus
nautilus-mediainfo nautilus-share tracker tracker-extract tracker-miner-fs ubuntu-desktop
ubuntu-desktop-minimal
0 upgraded, 0 newly installed, 11 to remove and 0 not upgraded.
After this operation, 5,517 kB disk space will be freed.
Do you want to continue? [Y/n]
So trying to remove Tracker on Ubuntu 19.04 drags Nautilus, the desktop-icons extension, and the ubuntu-desktop metapackage along with it. On Fedora, on the other hand, more GNOME software is installed by default, and removing Tracker also wants to remove GNOME Boxes, Documents, Photos, and Totem, plus 134 other packages.
tracker status
claims its index holds more than 100,000 files and that it is currently indexing. If you want, though, you can try masking it and see whether that has any effect on your system:
systemctl --user mask tracker-store.service tracker-miner-fs.service tracker-miner-rss.service tracker-extract.service tracker-miner-apps.service tracker-writeback.service
For Tracker 3:
systemctl --user mask tracker-extract-3.service tracker-miner-fs-3.service tracker-miner-rss-3.service tracker-writeback-3.service tracker-xdg-portal-3.service tracker-miner-fs-control-3.service
After this, reset Tracker:
tracker reset --hard
For Tracker 3:
tracker3 reset -s -r
and reboot. To re-enable Tracker later, unmask the services:
systemctl --user unmask tracker-store.service tracker-miner-fs.service tracker-miner-rss.service tracker-extract.service tracker-miner-apps.service tracker-writeback.service
For Tracker 3:
systemctl --user unmask tracker-extract-3.service tracker-miner-fs-3.service tracker-miner-rss-3.service tracker-writeback-3.service tracker-xdg-portal-3.service tracker-miner-fs-control-3.service
and reboot your system afterwards.
crontab [ -u user ] file
or
crontab [ -u user ] { -l | -r | -e }
Field layout:
* * * * *
- - - - -
| | | | |
| | | | +----- day of week (0 - 6) (Sunday = 0)
| | | +---------- month (1 - 12)
| | +--------------- day of month (1 - 31)
| +-------------------- hour (0 - 23)
+------------------------- minute (0 - 59)
You can also put all the entries in a file and install them with crontab file. For example, run /bin/ls every minute:
* * * * * /bin/ls
During December, between 6 AM and noon, run /usr/bin/backup every 3 hours, on the hour:
0 6-12/3 * 12 * /usr/bin/backup
Monday through Friday at 5:00 PM, send a mail to alex@domain.name:
0 17 * * 1-5 mail -s "hi" alex@domain.name < /tmp/maildata
Every day at 0:20, 2:20, 4:20, and so on, run echo "haha":
20 0-23/2 * * * echo "haha"
A few more concrete examples:
0 */2 * * * /sbin/service httpd restart    # restart apache every two hours
50 7 * * * /sbin/service sshd start        # start sshd at 7:50 every day
50 22 * * * /sbin/service sshd stop        # stop sshd at 22:50 every day
0 0 1,15 * * fsck /home                    # check the /home disk on the 1st and 15th of each month
1 * * * * /home/bruce/backup               # run /home/bruce/backup at the first minute of every hour
00 03 * * 1-5 find /home -name "*.xxx" -mtime +4 -exec rm {} \;   # Mon-Fri at 3:00, delete *.xxx files under /home older than 4 days
30 6 */10 * * ls                           # run ls at 6:30 on the 1st, 11th, 21st, and 31st of each month
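The field semantics used in the examples above (`*`, ranges, steps, and comma lists) can be sketched in Python. This is a minimal illustration of how one crontab field is matched against a value, not a full cron implementation:

```python
def field_matches(field: str, value: int, low: int, high: int) -> bool:
    """Return True if `value` is covered by one crontab field."""
    for part in field.split(","):          # "1,15" -> ["1", "15"]
        part, _, step = part.partition("/")
        step = int(step) if step else 1    # "*/2" -> step of 2
        if part == "*":
            start, end = low, high
        elif "-" in part:
            a, b = part.split("-")
            start, end = int(a), int(b)    # "6-12" -> 6..12
        else:
            start, end = int(part), int(part)
        if value in range(start, end + 1, step):
            return True
    return False

# "20 0-23/2 * * *" fires at minute 20 of every even hour:
assert field_matches("0-23/2", 4, 0, 23)      # hour 4 matches
assert not field_matches("0-23/2", 5, 0, 23)  # hour 5 does not
assert field_matches("1,15", 15, 1, 31)       # day-of-month list
```

A full scheduler would apply this check to all five fields of an entry against the current time.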
Note: after a job runs at its scheduled time, the system mails its output to the owning user. If you don't want these mails, append > /dev/null 2>&1 to the line, e.g.:
20 03 * * * . /etc/profile;/bin/sh /var/www/runoob/test.sh > /dev/null 2>&1
If a script won't run under cron, source the environment at the top of the script:
#!/bin/sh
. /etc/profile
. ~/.bash_profile
Alternatively, load the environment from the crontab entry itself by prefixing the command with . /etc/profile;/bin/sh, for example:
20 03 * * * . /etc/profile;/bin/sh /var/www/runoob/test.sh