
Requirements

This is a follow-up to a credential-stuffing incident. With the previously written Suricata login_audit script, every account logging in to the site was successfully audited. The accounts that analysis flagged as suspicious now need a reverse lookup, mainly to determine whether an account has already been marked as leaked.

Pitfalls

Scrapy has no JS engine and can only crawl static pages; pages rendered dynamically by JavaScript are not supported. Scrapy-Splash, however, makes crawling dynamic pages possible.

Deployment

1. Scrapy-Splash

$ pip install scrapy-splash --user

2. Splash Instance

Because Scrapy-Splash talks to the Splash HTTP API, a **Splash instance is required; Splash is usually run in Docker**.
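Scrapy-Splash wraps this for you, but the underlying Splash HTTP API can also be driven directly. As a conceptual sketch only (the address assumes the Docker setup below; render.html is Splash's simplest rendering endpoint), the wrapping amounts to rewriting the target URL into a Splash API call:

```python
from urllib.parse import urlencode

SPLASH = "http://localhost:8050"  # the Splash instance (assumed address)

def render_url(target, wait=2):
    # Splash's render.html endpoint returns the page HTML after JS execution
    qs = urlencode({"url": target, "wait": wait})
    return f"{SPLASH}/render.html?{qs}"

print(render_url("https://httpbin.org/get"))
```

Fetching that URL with any HTTP client returns the JavaScript-rendered HTML.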

$ more docker-compose.yml
version: "2.0"

services:
  splash:
    restart: always
    image: scrapinghub/splash
    tty: true
    ports:
      - "8050:8050"
    network_mode: "bridge"
    container_name: "Splash"
    hostname: "Splash"

3. Configure the Splash service (all of the following goes into settings.py)

3.1 Add the Splash server address

SPLASH_URL = 'http://localhost:8050'

3.2 Enable the Splash middlewares

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

Order 723 is just before HttpProxyMiddleware (750) in default scrapy settings.
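That ordering can be made concrete with a small illustrative snippet (the 750 value for HttpProxyMiddleware is Scrapy's built-in default; it is not set in the project's settings.py). Scrapy calls process_request() in ascending order of these values, so both Splash middlewares run just before the proxy middleware:

```python
# Priorities from the settings above, plus Scrapy's default HttpProxyMiddleware (750)
DOWNLOADER_MIDDLEWARES = {
    "scrapy_splash.SplashCookiesMiddleware": 723,
    "scrapy_splash.SplashMiddleware": 725,
    "scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware": 750,
    "scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware": 810,
}

# Requests pass through middlewares in ascending priority order
# (responses come back through them in the reverse order)
request_order = sorted(DOWNLOADER_MIDDLEWARES, key=DOWNLOADER_MIDDLEWARES.get)
print([name.rsplit(".", 1)[-1] for name in request_order])
```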

3.3 Enable SplashDeduplicateArgsMiddleware

SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}

3.4 Set a custom DUPEFILTER_CLASS

DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'

3.5 If using Scrapy's HTTP cache, set a Splash-aware storage backend

HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'

4. Code

Note: once Scrapy-Splash is in place, the Crawlera middleware can no longer be used directly; an external Lua script has to be referenced manually instead.

# -*- coding: utf-8 -*-
import scrapy
from haveibeenpwned.items import feed

import re
import json
import pandas as pd
from scrapy_splash import SplashRequest


"""
from redis crawl haveibeenpwned
"""

LUA_SOURCE = """
function main(splash)
    local host = "proxy.crawlera.com"
    local port = 8010
    local user = "api_key"
    local password = ""
    local session_header = "X-Crawlera-Session"
    local session_id = "create"

    splash:on_request(function (request)
        request:set_header("X-Crawlera-UA", "desktop")
        request:set_header(session_header, session_id)
        request:set_proxy{host, port, username=user, password=password}
    end)

    splash:on_response_headers(function (response)
        if response.headers[session_header] ~= nil then
            session_id = response.headers[session_header]
        end
    end)

    splash:go(splash.args.url)
    return splash:html()
end
"""

class CheckSpider(scrapy.Spider):
    name = 'scrapy_demo'
    start_urls = 'https://httpbin.org/get'

    def start_requests(self):
        yield SplashRequest(self.start_urls, self.parse, endpoint='execute', args={'wait': 3, 'lua_source': LUA_SOURCE})

    def parse(self, response):
        print(response.text)

References:

Since AWS released the new Traffic Mirroring feature in June, I have been using it pretty much from day one. Unlike traditional switch-based port mirroring, AWS delivers the mirrored traffic to the analysis engine (Suricata, Zeek) encapsulated in VXLAN. Exactly this detail led me into the following issues, which I am writing up here in the hope they help someone.

  1. The mirror target — what we usually call the traffic-analysis engine — has a limit on how much it can receive.

If Suricata or Zeek is deployed on a non-dedicated instance, you can receive mirrored traffic from at most 10 sources at a time, i.e. from only 10 network interfaces. Unluckily, we ran into exactly this limit. The fix is simple: use a Dedicated Instance, which raises the limit to 100, or use an AWS Network Load Balancer, which removes it entirely. As shown:

image-20191016164935291


  2. Differences between C5 and C4 instances

Of the instances I currently use, I have tested both C5 and C4. Their NIC drivers differ, as shown:

image-20191016160923628

The most tangible difference in my experience: on a C4 instance, without the PF_RING capture mode, Suricata's drop rate was dismal — packet loss started at 0.5 Gbps on a 32-core / 60 GB machine; switching to PF_RING made the problem disappear. A C5 instance has no such issue: with AF-PACKET it handled 2 Gbps of pure HTTP traffic without a single drop on a 16-core / 32 GB machine.


  3. Packets exceeding MTU 9001 get truncated

While investigating a security incident these past few days, I noticed a large gap between the traffic-derived HTTP events and the log-derived HTTP events for the same monitored Nginx. After two days of troubleshooting I finally pinned it down. The Suricata kernel was not dropping packets, so I first suspected Suricata's HTTP parsing. Packet captures eventually revealed the real culprit: VXLAN. Because AWS encapsulates mirrored traffic in VXLAN, 50 extra bytes are added on top of the original MTU, so packets get truncated and the HTTP events cannot be reconstructed. The screenshots below show a packet whose HTTP event could not be reconstructed: after loading the capture into Suricata, only the HTTP information before the 9015-byte packet was recovered; every event after it was lost.

image-20191014095249725

image-20191014094741753

Official description:

​ For example, if an 8996 byte packet is mirrored, and the traffic mirror target MTU value is 9001 bytes, the mirror encapsulation results in the mirrored packet being greater than the MTU value. In this case, the mirror packet is truncated. To prevent mirror packets from being truncated, set the traffic mirror source interface MTU value to 54 bytes less than the traffic mirror target MTU value. For more information about configuring the network MTU value, see Network Maximum Transmission Unit (MTU) for Your EC2 Instance in the Amazon EC2 User Guide for Linux Instances.

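The encapsulation overhead can be tallied up directly. A back-of-the-envelope sketch with standard header sizes (attributing the remaining 4 bytes of AWS's 54-byte recommendation is my assumption — it leaves margin, e.g. for an optional VLAN tag):

```python
# Standard VXLAN encapsulation prepends a whole new outer frame
OUTER_HEADERS = {
    "outer_ethernet": 14,  # new outer MAC header
    "outer_ipv4": 20,
    "outer_udp": 8,
    "vxlan": 8,
}

overhead = sum(OUTER_HEADERS.values())
print(overhead)   # 50 extra bytes per mirrored packet
print(9001 - 54)  # source MTU AWS recommends for a 9001-byte target
```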


To verify the issue of VXLAN breaking Suricata's parsing, I ran a dedicated test.

Preparation:

  • Created a file test_files containing only the text 'hello, world!';
  • To make sure packets reach the MTU of 9001 in transit, generated an empty 10 MB file 10mb_exist_files.

Total requests: 14

Request order:

  1. Client -> Web Server -> test_files (3 times)
  2. Client -> Web Server -> 10mb_exist_files (1 time)
  3. Client -> Web Server -> test_files (10 times)

Normal case:

  • Client -> Web Server -> test_files (3 times) - OK
  • Client -> Web Server -> 10mb_exist_files (1 time) - OK
  • Client -> Web Server -> test_files (10 times) - OK

Abnormal case:

  • Client -> Web Server -> test_files (3 times) - OK
  • Client -> Web Server -> 10mb_exist_files (1 time) - broken
  • Client -> Web Server -> test_files (10 times) - missing

MTU: 9001

  • Packet details for non-mirrored traffic:

Packets 20 through 6126 carry the transfer of the 10 MB file (10mb_exist_files).

image-20191016103409831


The HTTP packets show that both the request and the response can be reconstructed normally.

image-20191016104502919


Suricata's parsing result for the capture:

$ cat http-2019-10-16.json | wc -l
14
{
    "timestamp": "2019-10-15T22:52:10.180505+0800",
    "flow_id": 415026399241324,
    "pcap_cnt": 6127,
    "event_type": "http",
    "src_ip": "y.y.y.y",
    "src_port": 43418,
    "dest_ip": "x.x.x.x",
    "dest_port": 8000,
    "proto": "TCP",
    "tx_id": 3,
    "http": {
        "hostname": "x.x.x.x",
        "http_port": 8000,
        "url": "/file/10mb_exist_files",
        "http_user_agent": "python-requests/1.2.3 CPython/2.7.16 Linux/4.14.123-86.109.amzn1.x86_64",
        "http_content_type": "text/html",
        "accept": "*/*",
        "accept_encoding": "gzip, deflate, compress",
        "content_length": "41943044",
        "content_type": "text/html; charset=utf-8",
        "date": "Tue, 15 Oct 2019 14:52:10 GMT",
        "server": "Werkzeug/0.16.0 Python/2.7.16",
        "http_method": "GET",
        "protocol": "HTTP/1.1",
        "status": 200,
        "length": 41943044
    }
}

  • Packet details for mirrored traffic (i.e. after VXLAN encapsulation):

Again, packets 26 through 6170 carry the 10 MB file (10mb_exist_files) transfer, but packet 34 is already flagged as exceeding the maximum length.

image-20191016105636906


Data beyond the limit is truncated, flagged as: 50 bytes missing in capture file

image-20191016150322620

The HTTP view shows only the request packet; the response exceeded the MTU of 9001, so no response packet is present.

image-20191016105948606


Suricata's parsing result for the capture:

$ cat http-2019-10-16.json | wc -l
4
{
    "timestamp": "2019-10-15T22:52:48.295823+0800",
    "flow_id": 1746715371266426,
    "event_type": "http",
    "src_ip": "y.y.y.y",
    "src_port": 43420,
    "dest_ip": "x.x.x.x",
    "dest_port": 8000,
    "proto": "TCP",
    "tx_id": 3,
    "http": {
        "hostname": "x.x.x.x",
        "http_port": 8000,
        "url": "/file/10mb_exist_files",
        "http_user_agent": "python-requests/1.2.3 CPython/2.7.16 Linux/4.14.123-86.109.amzn1.x86_64",
        "accept": "*/*",
        "accept_encoding": "gzip, deflate, compress",
        "http_method": "GET",
        "protocol": "HTTP/1.1",
        "status": 200,
        "length": 0
    }
}

Conclusion:

Compared with the non-mirrored capture, Suricata is missing the parsed data of the 10 subsequent HTTP requests. And for the request for 10mb_exist_files, the packets were truncated after exceeding the MTU, so its parsed HTTP data is incomplete as well.

Solution:

Quoting the AWS documentation directly: if an 8996-byte packet is mirrored and the traffic mirror target's MTU is 9001 bytes, the encapsulation makes the mirrored packet larger than the MTU and it gets truncated. To prevent mirrored packets from being truncated, set the traffic mirror source interface's MTU to 54 bytes less than the traffic mirror target's MTU. For more information on configuring the network MTU, see Network Maximum Transmission Unit (MTU) for Your EC2 Instance in the Amazon EC2 User Guide for Linux Instances.

Generally speaking, lowering the MTU may cost some network throughput: each packet carries less payload, so transferring the same volume of data takes more packets and therefore more overhead. It does not, however, cause transmission errors.
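That overhead is easy to quantify. A rough sketch for the 10 MB test file (assuming plain IPv4 + TCP with 40 header bytes per packet, and ignoring slow start and retransmits):

```python
FILE_SIZE = 10 * 1024 * 1024  # the 10 MB test file
IP_TCP_HEADERS = 20 + 20      # IPv4 + TCP, no options

def packets_needed(mtu):
    payload = mtu - IP_TCP_HEADERS   # usable TCP payload per packet
    return -(-FILE_SIZE // payload)  # ceiling division

print(packets_needed(9001))  # jumbo frames
print(packets_needed(1500))  # standard MTU: roughly 6x more packets, same data
```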


MTU: 1500

$ ip link show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 02:8a:2d:87:02:8e brd ff:ff:ff:ff:ff:ff

$ sudo ip link set dev eth0 mtu 1500

$ ip link show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 02:8a:2d:87:02:8e brd ff:ff:ff:ff:ff:ff

image-20191016154823748

image-20191016155112376

Suricata's parsing result for the capture:

$ cat http-2019-10-16.json | wc -l
14
{
    "timestamp": "2019-10-16T15:14:15.576656+0800",
    "flow_id": 1135596924232203,
    "event_type": "http",
    "src_ip": "y.y.y.y",
    "src_port": 43554,
    "dest_ip": "x.x.x.x",
    "dest_port": 8000,
    "proto": "TCP",
    "tx_id": 3,
    "http": {
        "hostname": "x.x.x.x",
        "http_port": 8000,
        "url": "/file/10mb_exist_files",
        "http_user_agent": "python-requests/1.2.3 CPython/2.7.16 Linux/4.14.123-86.109.amzn1.x86_64",
        "http_content_type": "text/html",
        "accept": "*/*",
        "accept_encoding": "gzip, deflate, compress\n",
        "content_length": "41943044",
        "content_type": "text/html; charset=utf-8",
        "date": "Wed, 16 Oct 2019 07:14:15 GMT",
        "server": "Werkzeug/0.16.0 Python/2.7.16",
        "http_method": "GET",
        "protocol": "HTTP/1.1",
        "status": 200,
        "length": 41943044
    }
}

Conclusion: with the MTU lowered to 1500, everything checks out, whether inspected in Wireshark or judged by Suricata's protocol reconstruction.


A closing note: honestly, AWS's in-cloud traffic mirroring is quite good — at the very least it beats the traditional approach of installing an agent on every cloud host to forward traffic. Domestic cloud vendors don't seem to offer this feature yet.

That said, VXLAN encapsulation pushing packets over the maximum MTU really is a trap. Asking an established architecture to adjust its MTU is theoretically feasible, but in practice network changes in production are made very cautiously; unless the business absolutely must have traffic mirroring, it is hard to convince the ops team to make the change — if anything goes wrong, it goes badly wrong.

References

Background

Because the site has recently been under heavy credential-stuffing attacks, I wanted to monitor the login endpoint for anomalous behavior. Plain alerts were not very effective, so I extended Suricata with a Lua script that audits the login endpoint. Combining the login events with some simple correlation alerting in Wazuh works quite well.

Requirements

  • The audited content includes sensitive user information, so sensitive fields must be hashed before being output;
  • Besides the common HTTP header fields, selected custom fields should be output on demand;
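The two requirements can be sketched in a few lines of Python (field names here are hypothetical; the production script below implements the same idea in Lua inside Suricata): hash the credential before it is logged, and keep only whitelisted headers.

```python
import hashlib

# Only whitelisted headers make it into the event (illustrative subset)
HEADER_WHITELIST = {"user-agent": "user_agent", "x-requested-with": "x_requested_with"}

def audit_record(email, password, headers):
    record = {
        "email": email,
        # never log the raw credential, only a hash
        "password": hashlib.md5(password.encode()).hexdigest(),
    }
    for name, value in headers.items():
        key = HEADER_WHITELIST.get(name.lower())
        if key is not None:  # anything not whitelisted is dropped
            record[key] = value
    return record

rec = audit_record("admin@canon.com", "secret",
                   {"User-Agent": "curl/7.64.1", "Cookie": "sessionID=abc"})
print(rec)
```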

Code

json = require "json"
md5 = require "md5"

-- login endpoint
login_url = "/login"
-- login success code
success_code = 0
-- event_name
event_name = "login_audit"
-- event_type
event_type = "lua"
-- app_type
app_type = "web"
-- logs
name = "web_login_audit.json"

-- common_mapping
http_common_mapping = '{"accept":"accept","accept-charset":"accept_charset","accept-encoding":"accept_encoding","accept-language":"accept_language","accept-datetime":"accept_datetime","authorization":"authorization","cache-control":"cache_control","from":"from","max-forwards":"max_forwards","origin":"origin","pragma":"pragma","proxy-authorization":"proxy_authorization","via":"via","vary":"vary","x-requested-with":"x_requested_with","x-forwarded-proto":"x_forwarded_proto","accept-range":"accept_range","allow":"allow","connection":"connection","content-encoding":"content_encoding","content-language":"content_language","content-location":"content_location","content-md5":"content_md5","content-range":"content_range","date":"date","last-modified":"last_modified","location":"location","proxy-authenticate":"proxy_authenticate","referrer":"refer","retry-after":"retry_after","server":"server","transfer-encoding":"transfer_encoding","upgrade":"upgrade","www-authenticate":"www_authenticate","x-authenticated-user":"x_authenticated_user","user-agent":"user_agent"}'
common_mapping_table = json.decode(http_common_mapping)

-- request_mapping
http_request_mapping = '{"content-length":"request_content_length","content-type":"request_content_type"}'
request_mapping_table = json.decode(http_request_mapping)

-- response_mapping
http_response_mapping = '{"content-length":"response_content_length","content-type":"response_content_type"}'
response_mapping_table = json.decode(http_response_mapping)

-- helper functions
function md5Encode(args)
    m = md5.new()
    m:update(args)
    return md5.tohex(m:finish())
end

function urlDecode(args)
    s = string.gsub(args, "%%(%x%x)", function(h) return string.char(tonumber(h, 16)) end)
    return s
end

-- email=xxx@yyy.com&password=zzz
function formatStr(args)
    t = {}
    data = string.split(args, '&')
    for n, v in ipairs(data) do
        d = string.split(v, '=')
        t[d[1]] = d[2]
    end
    return t
end

function string.split(s, p)
    local rt = {}
    string.gsub(s, '[^'..p..']+', function(w) table.insert(rt, w) end )
    return rt
end

function trim(s)
    return (string.gsub(s, "^%s*(.-)%s*$", "%1"))
end

function formatCookie(args)
    t = {}
    data = string.split(args, ";")
    for n, v in ipairs(data) do
        v = trim(v)
        d = string.split(v, "=")
        t[d[1]] = d[2]
    end
    return t
end

-- default function
function init (args)
    local needs = {}
    needs["protocol"] = "http"
    return needs
end


function setup (args)
    filename = SCLogPath() .. "/" .. name
    file = assert(io.open(filename, "a"))
    SCLogInfo("Web Login Log Filename " .. filename)
    http = 0
end


function log(args)

    -- init tables
    http_table = {}

    -- http_hostname
    http_hostname = HttpGetRequestHost()
    http_table["hostname"] = http_hostname

    -- http_url
    http_url = HttpGetRequestUriNormalized()
    if http_url ~= login_url then
        return
    end
    http_table["url"] = login_url

    -- http_url_path
    http_table["url_path"] = http_url

    -- http_method
    rl = HttpGetRequestLine()
    http_method = string.match(rl, "%w+")
    http_table["method"] = http_method

    -- http_status
    rsl = HttpGetResponseLine()
    status_code = string.match(rsl, "%s(%d+)%s")
    http_status = tonumber(status_code)
    http_table["status"] = http_status

    -- http_protocol
    http_protocol = string.match(rsl, "(.-)%s")
    http_table["protocol"] = http_protocol

    -- request_token
    cookie = HttpGetRequestHeader("Cookie")
    if cookie ~= nil then
        session_id = string.match(cookie, "sessionID=(.-);")
        if session_id ~= nil then
            http_table["request_token"] = md5Encode(session_id)
        else
            http_table["request_token"] = nil
        end
    end

    -- response_token && member_id
    set_cookie = HttpGetResponseHeader("Set-Cookie")
    if set_cookie ~= nil then
        session_id = string.match(set_cookie, "sessionID=(.-);")
        http_table["response_token"] = md5Encode(session_id)
        member_id = string.match(set_cookie, "memberId=(.-);")
        http_table["member_id"] = tonumber(member_id)
    end

    -- login_results
    a, o, e = HttpGetResponseBody()
    for n, v in ipairs(a) do
        body = json.decode(v)
        results_code = tonumber(body["code"])
        if results_code == success_code then
            results = "success"
        else
            results = "failed"
        end
    end
    http_table["results"] = results
    http_table["results_code"] = results_code

    -- email & password
    a, o, e = HttpGetRequestBody()
    for n, v in ipairs(a) do
        res = formatStr(v)
        mail = urlDecode(res['email'])
        pass = md5Encode(res['password'])
    end
    http_table["email"] = mail
    http_table["password"] = pass

    -- RequestHeaders
    rh = HttpGetRequestHeaders()
    for k, v in pairs(rh) do
        key = string.lower(k)

        common_var = common_mapping_table[key]
        if common_var ~= nil then
            http_table[common_var] = v
        end

        request_var = request_mapping_table[key]
        if request_var ~= nil then
            http_table[request_var] = v
        end
    end

    -- ResponseHeaders
    rsh = HttpGetResponseHeaders()
    for k, v in pairs(rsh) do
        key = string.lower(k)

        common_var = common_mapping_table[key]
        if common_var ~= nil then
            http_table[common_var] = v
        end

        response_var = response_mapping_table[key]
        if response_var ~= nil then
            http_table[response_var] = v
        end
    end

    -- timestring = SCPacketTimeString() 2019-09-10T06:08:35.582449+0000
    sec, usec = SCPacketTimestamp()
    timestring = os.date("!%Y-%m-%dT%T", sec) .. "." .. usec .. "+0000"

    ip_version, src_ip, dst_ip, protocol, src_port, dst_port = SCFlowTuple()

    -- flow_id
    id = SCFlowId()
    flow_id = string.format("%.0f", id)

    -- true_ip
    true_client_ip = HttpGetRequestHeader("True-Client-IP")
    if true_client_ip ~= nil then
        src_ip = true_client_ip
    end

    -- table
    raw_data = {
        timestamp = timestring,
        flow_id = flow_id,
        src_ip = src_ip,
        src_port = src_port,
        dest_ip = dst_ip,
        dest_port = dst_port,
        event_name = event_name,
        event_type = event_type,
        app_type = app_type,
        http = http_table,
        proto = "TCP"
    }

    -- json encode
    data = json.encode(raw_data)

    file:write(data .. "\n")
    file:flush()

    http = http + 1
end


function deinit (args)
    SCLogInfo ("HTTP transactions logged: " .. http);
    file:close(file)
end

Example

{
    "src_port": 34963,
    "event_type": "lua",
    "proto": "TCP",
    "flow_id": "471693756052529",
    "timestamp": "2019-10-07T13:51:01.560556+0000",
    "event_name": "login_audit",
    "dest_port": 3007,
    "http": {
        "response_content_length": "446",
        "response_content_type": "application/json; charset=utf-8",
        "accept_encoding": "gzip",
        "accept": "*/*",
        "server": "nginx",
        "results_code": 0,
        "vary": "Accept-Encoding",
        "password": "420ce28ef813a2dc05e52a7e24eb0d5c",
        "date": "Mon, 07 Oct 2019 13:51:01 GMT",
        "request_content_type": "application/x-www-form-urlencoded; charset=UTF-8",
        "accept_language": "en-US,en;q=0.9",
        "url": "/login",
        "x_requested_with": "XMLHttpRequest",
        "x_forwarded_proto": "https",
        "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.90 Safari/537.36",
        "origin": "https://www.canon.com",
        "response_token": "4f5f386aec34e877ce63fce7ddd3f15f",
        "pragma": "no-cache",
        "connection": "close",
        "status": 200,
        "protocol": "HTTP/1.1",
        "hostname": "www.canon.com",
        "url_path": "/login",
        "cache_control": "no-cache, max-age=0, no-store, must-revalidate",
        "method": "POST",
        "email": "admin@canon.com",
        "request_token": "109464370570b23fa9768078ab1ac8a9",
        "member_id": 123456,
        "results": "success",
        "request_content_length": "49"
    },
    "src_ip": "77.99.22.17",
    "dest_ip": "1.1.1.1",
    "app_type": "web"
}

Optional Dependencies

PF_RING

References:

$ sudo git clone https://github.com/ntop/PF_RING.git
$ cd PF_RING/userland/lib
$ ./configure
$ sudo make install

$ cd ../libpcap
$ ./configure
$ sudo make install

$ cd ../tcpdump
$ ./configure
$ sudo make install

$ cd ../../kernel
$ make
$ sudo make install

$ modprobe pf_ring enable_tx_capture=0 min_num_slots=65536

Load pf_ring at boot

$ echo 'pf_ring' >> /etc/modules
$ sudo reboot

root@ubuntu:~# lsmod | grep pf_ring
pf_ring 1245184 0

GeoLocation

References:

$ sudo apt-get install libmaxminddb-dev
$ wget http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.tar.gz
$ tar zxf GeoLite2-City.tar.gz
$ mkdir -p /usr/share/GeoIP
$ mv GeoLite2-City_20190702/GeoLite2-City.mmdb /usr/share/GeoIP/

Testing

$ zeek -e "print lookup_location(8.8.8.8);"

If nothing is found, or mmdb_dir is not set, Zeek looks for the location database files in the following order:
  • /usr/share/GeoIP/GeoLite2-City.mmdb
  • /var/lib/GeoIP/GeoLite2-City.mmdb
  • /usr/local/share/GeoIP/GeoLite2-City.mmdb
  • /usr/local/var/GeoIP/GeoLite2-City.mmdb
  • /usr/share/GeoIP/GeoLite2-Country.mmdb
  • /var/lib/GeoIP/GeoLite2-Country.mmdb
  • /usr/local/share/GeoIP/GeoLite2-Country.mmdb
  • /usr/local/var/GeoIP/GeoLite2-Country.mmdb

If you get "Zeek was not configured for GeoIP support", pass ./configure --with-geoip=/usr/share/GeoIP when building from source.
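The fallback order above is a simple first-match search. A hypothetical sketch of that behavior (paths taken from the list; the exists check is injectable purely so the logic can be tested):

```python
import os

# Search order taken from the list above (City databases first)
CANDIDATES = [
    "/usr/share/GeoIP/GeoLite2-City.mmdb",
    "/var/lib/GeoIP/GeoLite2-City.mmdb",
    "/usr/local/share/GeoIP/GeoLite2-City.mmdb",
    "/usr/local/var/GeoIP/GeoLite2-City.mmdb",
    "/usr/share/GeoIP/GeoLite2-Country.mmdb",
    "/var/lib/GeoIP/GeoLite2-Country.mmdb",
    "/usr/local/share/GeoIP/GeoLite2-Country.mmdb",
    "/usr/local/var/GeoIP/GeoLite2-Country.mmdb",
]

def find_mmdb(paths=CANDIDATES, exists=os.path.exists):
    # first existing database wins; None means no GeoIP data available
    return next((p for p in paths if exists(p)), None)

print(find_mmdb())
```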


Gperftools

References:

$ sudo git clone https://github.com/gperftools/gperftools.git
$ sudo apt-get install libunwind-dev autoconf automake libtool
$ cd gperftools
$ ./autogen.sh
$ ./configure
$ make
$ sudo make install

IPsumdump

References:

$ curl -O http://www.read.seas.harvard.edu/~kohler/ipsumdump/ipsumdump-1.86.tar.gz
$ tar -xzf ipsumdump-1.86.tar.gz
$ cd ipsumdump-1.86
$ ./configure --prefix=/usr/
$ make
$ sudo make install
$ sudo make clean

Krb5

References:

$ sudo apt-get install krb5-user

Jemalloc

References:

$ sudo git clone https://github.com/jemalloc/jemalloc.git
$ cd jemalloc
$ ./autogen.sh
$ make -j2
$ sudo make install
$ sudo ldconfig

Zeek

Required Dependencies

# Ubuntu
$ sudo apt-get install cmake make gcc g++ flex bison libpcap-dev libssl-dev python-dev swig zlib1g-dev

If your system uses Python 2.7, then you will also need to install the python-ipaddress package.

$ sudo apt-get install python-ipaddress

References:

$ git clone --recursive https://github.com/zeek/zeek
$ cd zeek
$ ./configure --with-pcap=/usr/local --with-geoip=/usr/share/GeoIP --enable-jemalloc --enable-perftools
$ make
$ sudo make install

Make sure Zeek is linked against the intended libpcap library:

$ ldd /usr/local/zeek/bin/zeek | grep pcap
libpcap.so.1 => /usr/local/lib/libpcap.so.1 (0x00007fd5b3cfc000)
Add PATH
$ sudo vim ~/.bashrc
export PATH=/usr/local/zeek/bin:$PATH

$ source ~/.bashrc

Using PF_RING
$ vim /usr/local/zeek/etc/node.cfg

# Example ZeekControl node configuration.
#
# This example has a standalone node ready to go except for possibly changing
# the sniffing interface.

# This is a complete standalone configuration. Most likely you will
# only need to change the interface.
# [zeek]
# type=standalone
# host=localhost
# interface=ens33

## Below is an example clustered configuration. If you use this,
## remove the [zeek] node above.

#[logger]
#type=logger
#host=localhost
#
[manager]
type=manager
host=localhost
#
[proxy-1]
type=proxy
host=localhost
#
[worker-1]
type=worker
host=localhost
interface=ens33
lb_method=pf_ring
lb_procs=35
pin_cpus=0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34
#
#[worker-2]
#type=worker
#host=localhost
#interface=eth0
Enable Json
$ vim /usr/local/zeek/share/zeek/site/local.zeek

# Uncomment the following line to enable logging of link-layer addresses. Enabling
# this adds the link-layer address for each connection endpoint to the conn.log file.
# @load policy/protocols/conn/mac-logging
#
@load policy/tuning/json-logs.zeek

Load testing

  1. Check for packet drops
$ zeekctl netstats; date
worker-1-1: 1585999488.317958 recvd=18 dropped=0 link=18
worker-1-2: 1585999488.335990 recvd=0 dropped=0 link=0
worker-1-3: 1585999488.345568 recvd=2 dropped=0 link=2
Sat Apr 4 04:24:48 PDT 2020
  2. Check resource usage
$ zeekctl top; date
Name Type Host Pid VSize Rss Cpu Cmd
manager manager localhost 4818 1G 87M 0% zeek
proxy-1 proxy localhost 4870 649M 85M 0% zeek
worker-1-1 worker localhost 4948 655M 92M 0% zeek
worker-1-2 worker localhost 4952 655M 92M 0% zeek
worker-1-3 worker localhost 4954 655M 92M 0% zeek
Sat Apr 4 04:24:52 PDT 2020
  3. Check traffic rate (10-second average)
$ zeekctl capstats; date
Interface kpps mbps (10s average)
----------------------------------------
localhost/ens5 37.8 375.2
Fri Aug 16 16:12:47 UTC 2019

References

Installation

PF_RING

1. pf_ring

  • Binary install
# 18.04 LTS
$ apt-get install software-properties-common wget
$ add-apt-repository universe [ unless you have done it previously ]
$ wget http://apt-stable.ntop.org/18.04/all/apt-ntop-stable.deb
$ apt install ./apt-ntop-stable.deb

$ apt-get clean all
$ apt-get update
$ apt-get install pfring

  • Build from source
# Dependencies
$ apt install git make gcc libelf-dev build-essential subversion flex libnuma-dev bison pkg-config libtool rustc cargo libjansson-dev ethtool autoconf autogen liblzma-dev libpcre3-dev libyaml-dev libpcap-dev zlib1g-dev

# Download
$ git clone https://github.com/ntop/PF_RING.git

# Install
$ cd PF_RING/kernel
$ make
$ sudo make install

# Load pf_ring.ko
$ sudo insmod pf_ring.ko min_num_slots=65536 transparent_mode=2 enable_tx_capture=0

# Show pf_ring info
$ cat /proc/net/pf_ring/info
PF_RING Version          : 7.5.0 (dev:14f62e0edb2b54cd614ab9d1f6467ccb8c6c9c32)
Total rings              : 0

Standard (non ZC) Options
Ring slots               : 65536
Slot version             : 17
Capture TX               : No [RX only]
IP Defragment            : No
Socket Mode              : Standard
Cluster Fragment Queue   : 0
Cluster Fragment Discard : 0

# Unload pf_ring.ko
$ sudo rmmod pf_ring

2. libpfring and libpcap

# Install
$ cd PF_RING/userland/lib
$ ./configure && make
$ sudo make install
$ cd ../libpcap
$ ./configure && make
$ sudo make install

# Verify
$ cd PF_RING/userland/examples
$ make

# Receive packets
root@ubuntu:/opt/PF_RING/userland/examples# ./pfcount -i ens33
Using PF_RING v.7.5.0
Capturing from ens33 [mac: 00:0C:29:D5:B9:8F][if_index: 2][speed: 0Mb/s]
# Device RX channels: 1
# Polling threads: 1
Dumping statistics on /proc/net/pf_ring/stats/51441-ens33.3
=========================
Absolute Stats: [2 pkts total][0 pkts dropped][0.0% dropped]
[2 pkts rcvd][424 bytes rcvd]
=========================

# Send packets
root@ubuntu:/opt/PF_RING/userland/examples# sudo ./pfsend -f 64byte_packets.pcap -n 0 -i ens33 -r 5
Sending packets on ens33
Using PF_RING v.7.5.0
Estimated CPU freq: 2429795000 Hz
Unable to open file 64byte_packets.pcap

3. tcpdump

# Install
$ cd PF_RING/userland/tcpdump/
$ ./configure && make
$ sudo make install

References:

LuaJIT

# Install
$ wget http://luajit.org/download/LuaJIT-2.0.5.tar.gz
$ tar -zxf LuaJIT-2.0.5.tar.gz
$ cd LuaJIT-2.0.5/
$ make && make install
$ ldconfig

# Verify
$ ldconfig -p | grep lua
liblua5.1.so.0 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/liblua5.1.so.0
liblua5.1.so (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/liblua5.1.so
liblua5.1-c++.so.0 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/liblua5.1-c++.so.0
liblua5.1-c++.so (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/liblua5.1-c++.so

Hyperscan

1. boost

# Dependencies
$ apt install cmake

# Download
$ wget https://dl.bintray.com/boostorg/release/1.69.0/source/boost_1_69_0.tar.gz

# Install
$ tar -xvf boost_1_69_0.tar.gz
$ cd boost_1_69_0/
$ ./bootstrap.sh
$ sudo ./b2 --with-iostreams --with-random install
$ ldconfig

2. ragel

# Dependencies
$ sudo apt-get install autoconf

# Download
$ wget http://www.colm.net/files/ragel/ragel-6.10.tar.gz

# Install
$ tar zxvf ragel-6.10.tar.gz
$ cd ragel-6.10
$ ./configure
$ make
$ sudo make install
$ ldconfig

3. hyperscan

# Dependencies
$ sudo apt install libpcap-dev

# Download
$ wget https://github.com/intel/hyperscan/archive/v5.1.0.tar.gz -O hyperscan-5.1.0.tar.gz

# Install
$ tar -zxvf hyperscan-5.1.0.tar.gz
$ cd hyperscan-5.1.0
$ mkdir cmake-build
$ cd cmake-build
$ cmake -DBUILD_SHARED_LIBS=on -DCMAKE_BUILD_TYPE=Release ..
$ make -j8
$ sudo make install
$ ldconfig

4. Verify

$ ldconfig -p | grep hs
libhs_runtime.so.5 (libc6,x86-64) => /usr/local/lib/libhs_runtime.so.5
libhs_runtime.so (libc6,x86-64) => /usr/local/lib/libhs_runtime.so
libhs.so.5 (libc6,x86-64) => /usr/local/lib/libhs.so.5
libhs.so (libc6,x86-64) => /usr/local/lib/libhs.so

References:

Suricata

Dependencies

$ apt-get install libpcre3 libpcre3-dbg libpcre3-dev build-essential libpcap-dev   \
libnet1-dev libyaml-0-2 libyaml-dev pkg-config zlib1g zlib1g-dev \
libcap-ng-dev libcap-ng0 make libmagic-dev libjansson-dev \
libnss3-dev libgeoip-dev libhiredis-dev libevent-dev \
python-yaml rustc cargo libmaxminddb-dev liblzma-dev \
python3-distutils liblz4-dev

Installation: choose one of the two methods below

release

$ wget https://www.openinfosecfoundation.org/download/suricata-5.0.2.tar.gz
$ tar zxvf suricata-5.0.2.tar.gz
$ cd suricata-5.0.2
$ ./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var --enable-pfring --with-libpfring-includes=/usr/local/include --with-libpfring-libraries=/usr/local/lib --enable-geoip --enable-luajit --with-libluajit-includes=/usr/local/include/luajit-2.0/ --with-libluajit-libraries=/usr/local/lib/ --with-libhs-includes=/usr/local/include/hs/ --with-libhs-libraries=/usr/local/lib/
$ make && make install-full
$ ldconfig

git clone

$ mkdir suricata
$ cd suricata
$ git clone git://phalanx.openinfosecfoundation.org/oisf.git
$ cd oisf
$ git clone https://github.com/OISF/libhtp.git
$ ./autogen.sh
$ ./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var --enable-pfring --with-libpfring-includes=/usr/local/include --with-libpfring-libraries=/usr/local/lib --enable-geoip --enable-luajit --with-libluajit-includes=/usr/local/include/luajit-2.0/ --with-libluajit-libraries=/usr/local/lib/ --with-libhs-includes=/usr/local/include/hs/ --with-libhs-libraries=/usr/local/lib/
$ make && make install-full
$ ldconfig

Verification

1. PF_RING

$ suricata --build-info | grep PF_RING
Features: PCAP_SET_BUFF PF_RING AF_PACKET HAVE_PACKET_FANOUT LIBCAP_NG LIBNET1.1 HAVE_HTP_URI_NORMALIZE_HOOK PCRE_JIT HAVE_NSS HAVE_LUA HAVE_LUAJIT HAVE_LIBJANSSON PROFILING TLS MAGIC RUST
PF_RING support: yes

2. LuaJit

$ suricata --build-info | grep lua
LUA support: yes, through luajit
libluajit: yes

3. Hyperscan

$ suricata --build-info | grep Hyperscan
Hyperscan support: yes

Start

$ suricata --pfring-int=ens6 --pfring-cluster-id=99 --pfring-cluster-type=cluster_flow -c /etc/suricata/suricata.yaml

Rule management

$ pip install --upgrade suricata-update

Scheduled rule updates

$ crontab -l
10 0 * * * /usr/bin/suricata-update --no-test && /usr/bin/suricatasc -c reload-rules

Tuning

https://www.jianshu.com/p/9348e211a6a2

Welcome to Hexo! This is your very first post. Check documentation for more info. If you get any problems when using Hexo, you can find the answer in troubleshooting or you can ask me on GitHub.

Quick Start

Create a new post

$ hexo new "My New Post"

More info: Writing

Run server

$ hexo server

More info: Server

Generate static files

$ hexo generate

More info: Generating

Deploy to remote sites

$ hexo deploy

More info: Deployment