
Weil Jimmer's Blog


Category: Product

Found 12 records. Page 2 of 3.

SoundCloud Downloader EXE

No Comments
-
Updated 2017-05-19 18:44:58

This application batch-downloads music from soundcloud.com: it can fetch and number every track by a given artist, download all the tracks in a playlist, or download a single track.

2017.02.15 - Bug fix: downloads were failing.

Version: 1.0.0.1

Last update: 2017.02.16

Download link 1: https://url.weils.net/l

Download link 2: https://url.weils.net/q

Product page: http://web.wbftw.org/product/soundclouddownloaderexe


This entry was posted in General, Software, Free, The Internet, Product, Tools By Weil Jimmer.

Python Advanced Downloader

No Comments
-
Updated 2015-11-20 18:47:40

I wrote this partly so my friends would be envious and want to learn programming too, and partly because I needed it myself. I wouldn't idly build something I would never use; a developer should support his own products, after all.

It is written in Python 3, mainly for use on a phone, and requires BeautifulSoup4.

(It looks ugly in the Windows command prompt, with no colors; when it actually runs in a Linux terminal on the phone, the colors are there.)

It mainly supports the following:

1. Grab target links from a target URL

For example, grabbing all the images on some Instagram page.

2. Load a list of pages and grab target links from each

For example: load page one of a site's photo album and grab the images, then load page two and grab the images, then page three... and so on.

3. Pattern-URL grabbing, probably the most primitive method of the lot.

For example: download http://example.com/1.jpg, then http://example.com/2.jpg, http://example.com/3.jpg, /4.jpg, /5.jpg...

4. Show the current list

5. Download the links on a list

As for the scraping itself, calling this an advanced scraper is no empty boast, even though it still falls short of the powerful one I wrote in VB.NET. That one imitates an ordinary user's browser, complete with cookies and headers, and even parses JS; that is hard to pull off in Python.

So this one is, at best, a notch below it.

It supports:

1. Grabbing every link on the page that merely looks like a URL, even when it is not embedded in any tag (detected with a regular expression).

2. The HREF attribute of A tags (hyperlinks).

3. The SRC attribute of IMG tags (images).

4. The SRC attribute of SOURCE tags (HTML5 audio/video).

5. The SRC attribute of EMBED tags (Flash).

6. The DATA attribute of OBJECT tags (page plugins).

7. The HREF attribute of LINK tags (CSS).

8. The SRC attribute of SCRIPT tags (JS).

9. The SRC attribute of FRAME tags (frames).

10. The SRC attribute of IFRAME tags (inline frames).

11. All of the above.

12. A custom tag name and attribute name (a feature the VB edition of my advanced scraper does not have).
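Modes 2 through 12 above are all the same operation with different parameters: walk the HTML, collect one attribute from one tag name. The full listing below does this with BeautifulSoup; here is a stdlib-only sketch of the same idea (the class and function names are mine, not the program's):

```python
from html.parser import HTMLParser

class AttrScraper(HTMLParser):
    # Collect one attribute from every occurrence of one tag,
    # e.g. ("a", "href"), ("img", "src"), ("object", "data"),
    # or any custom tag/attribute pair (mode 12).
    def __init__(self, tagname, attribute):
        super().__init__()
        self.tagname, self.attribute, self.urls = tagname, attribute, []

    def handle_starttag(self, tag, attrs):
        if tag == self.tagname:
            for name, value in attrs:
                if name == self.attribute and value is not None:
                    self.urls.append(value)

def scrape_attr(html, tagname, attribute):
    scraper = AttrScraper(tagname, attribute)
    scraper.feed(html)
    return scraper.urls

scrape_attr('<a href="/x">x</a><img src="1.jpg"><img alt="y">', "img", "src")
# → ['1.jpg']
```

HTMLParser lowercases tag and attribute names, so the lookup is case-insensitive for free.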

Keyword filtering is supported, with AND and OR logic: a URL must contain every keyword, or merely hit any one of them.
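The AND/OR filter comes down to a few lines. The real `check_in_filter` in the listing below splits the keyword string on commas the same way; the function name here is my own:

```python
def filter_urls(urls, keyword_str, use_and):
    # Empty filter keeps everything; otherwise require every keyword (AND)
    # or at least one keyword (OR) to appear somewhere in the URL.
    if keyword_str == "":
        return list(urls)
    keywords = keyword_str.split(",")
    test = all if use_and else any
    return [u for u in urls if test(k in u for k in keywords)]

urls = ["http://a.com/cat.jpg", "http://a.com/dog.png", "http://a.com/cat.png"]
filter_urls(urls, "cat,png", True)   # → ['http://a.com/cat.png']
filter_urls(urls, "cat,png", False)  # → all three URLs
```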

Pattern-URL download supports a start number, an end number, a step size, and a zero-padding width.

※Relative URLs are resolved as well.
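Feature three plus those start/end/step/padding options amount to a one-liner, mirroring the `zfill`/`range` logic in the listing's mode 3 (the helper name is mine):

```python
def pattern_urls(template, start, end, step, pad):
    # "{i}" in the template is replaced by the counter,
    # zero-padded to `pad` digits; the end number is inclusive.
    return [template.replace("{i}", str(i).zfill(pad))
            for i in range(start, end + 1, step)]

pattern_urls("http://example.com/{i}.jpg", 1, 5, 2, 3)
# → ['http://example.com/001.jpg', 'http://example.com/003.jpg', 'http://example.com/005.jpg']
```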

****************************************

* Name: Advanced Downloader

* Team: White Birch Forum Team

* Author: Weil Jimmer

* Website: http://0000.twgogo.org/

* Date: 2015.09.26

****************************************

Source Code

# coding: utf-8
"""Weil Jimmer For Safe Test Only"""
import os,urllib.request,urllib.parse,shutil,sys,re
from threading import Thread
from time import sleep
from sys import platform as _platform

GRAY = "\033[1;30m"
RED = "\033[1;31m"
LIME = "\033[1;32m"
YELLOW = "\033[1;33m"
BLUE = "\033[1;34m"
MAGENTA = "\033[1;35m"
CYAN = "\033[1;36m"
WHITE = "\033[1;37m"
BGRAY = "\033[1;47m"
BRED = "\033[1;41m"
BLIME = "\033[1;42m"
BYELLOW = "\033[1;43m"
BBLUE = "\033[1;44m"
BMAGENTA = "\033[1;45m"
BCYAN = "\033[1;46m"
BDARK_RED = "\033[1;48m"
UNDERLINE = "\033[4m"
END = "\033[0m"

if _platform.find("linux")<0:
	GRAY = ""
	RED = ""
	LIME = ""
	YELLOW = ""
	BLUE = ""
	MAGENTA = ""
	CYAN = ""
	WHITE = ""
	BGRAY = ""
	BRED = ""
	BLIME = ""
	BYELLOW = ""
	BBLUE = ""
	BMAGENTA = ""
	BCYAN = ""
	UNDERLINE = ""
	END = ""
	os.system("color e")

try:
	import pip
except:
	print(RED + "錯誤沒有安裝pip!" + END)
	input()
	exit()

try:
	from bs4 import BeautifulSoup
except:
	print(RED + "錯誤沒有安裝bs4!嘗試安裝中...!" + END)
	pip.main(["install","beautifulsoup4"])
	from bs4 import BeautifulSoup

global phone_
phone_ = False

try:
	import android
	droid = android.Android()
	phone_ = True
except:
	try:
		import clipboard
	except:
		print(RED + "錯誤沒有安裝clipboard!嘗試安裝中...!" + END)
		pip.main(["install","PyGTK"])
		pip.main(["install","clipboard"])
		import clipboard

def get_clipboard():
	global phone_
	if phone_==True:
		return str(droid.getClipboard().result)
	else:
		return clipboard.paste()

global target_url
target_url = [[],[],[],[],[],[],[],[],[]]

print (RED)
print ("*" * 40)
print ("*  Name:\tWeil_Advanced_Downloader")
print ("*  Team:" + LIME + "\tWhite Birch Forum Team" + RED)
print ("*  Developer:\tWeil Jimmer")
print ("*  Website:\thttp://0000.twgogo.org/")
print ("*  Date:\t2015.10.09")
print ("*" * 40)
print (END)

root_dir = "/sdcard/"
print("根目錄:" + root_dir)
global save_temp_dir
global save_dir
save_dir=str(input("存檔資料夾:"))
save_temp_dir=str(input("暫存檔資料夾(會自動刪除):"))

global target_array_index
target_array_index = 0

def int_s(k):
	try:
		return int(k)
	except:
		return -1

def reporthook(blocknum, blocksize, totalsize):
	readsofar = blocknum * blocksize
	if totalsize > 0:
		percent = readsofar * 1e2 / totalsize
		s = "\r%5.1f%% %*d / %d bytes" % (percent, len(str(totalsize)), readsofar, totalsize)
		sys.stderr.write(s)
		if readsofar >= totalsize:
			sys.stderr.write("\r" + MAGENTA + "%5.1f%% %*d / %d bytes" % (100, len(str(totalsize)), totalsize, totalsize))
	else:
		sys.stderr.write("\r未知檔案大小…下載中…" + str(readsofar) + " bytes")
		#sys.stderr.write("read %d\n" % (readsofar,))

def url_encode(url_):
	if url_.startswith("http://"):
		return 'http://' + urllib.parse.quote(url_[7:])
	elif url_.startswith("https://"):
		return 'https://' + urllib.parse.quote(url_[8:])
	elif url_.startswith("ftp://"):
		return 'ftp://' + urllib.parse.quote(url_[6:])
	elif ((not url_.startswith("ftp://")) and (not url_.startswith("http"))):
		return 'http://' + urllib.parse.quote(url_)
	return url_

def url_correct(url_):
	if ((not url_.startswith("ftp://")) and (not url_.startswith("http"))):
		return 'http://' + (url_)
	return url_

def download_URL(url,dir_name,ix,total,encode,return_yes_no):
	global save_temp_dir
	prog_str = "(" + str(ix) + "/" + str(total) + ")"
	if (total==0):
		prog_str=""
	file_name = url.split('/')[-1]
	file_name=file_name.replace(":","").replace("*","").replace('"',"").replace("\\","").replace("|","").replace("?","").replace("<","").replace(">","")
	if file_name=="":
		file_name="NULL"
	try:
		print(YELLOW + "下載中…" + prog_str + "\n" + url + "\n" + END)
		if not os.path.exists(root_dir + dir_name + "/") :
			os.makedirs(root_dir + dir_name + "/")
		opener = urllib.request.FancyURLopener({})
		opener.version = 'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.101 Safari/537.36'
		opener.addheader("Referer", url)
		opener.addheader("X-Forwarded-For", "0.0.0.0")
		opener.addheader("Client-IP", "0.0.0.0")
		local_file,response_header=opener.retrieve(url_encode(url), root_dir + dir_name + "/" + str(ix) + "-" + file_name, reporthook)
		print(MAGENTA + "下載完成" + prog_str + "!" + END)
	except urllib.error.HTTPError as ex:
		print(RED + "下載失敗" + prog_str + "!" + str(ex.code) + END)
	except:
		print(RED + "下載失敗" + prog_str + "!未知錯誤!" + END)
	if return_yes_no==0:
		return ""
	try:
		k=open(local_file,encoding=encode).read()
	except:
		k="ERROR"
		print(RED + "讀取失敗!" + END)
	try:
		if dir_name==save_temp_dir:
			shutil.rmtree(root_dir + save_temp_dir + "/")
	except:
		print(RED + "刪除暫存資料夾失敗!" + END)
	return k

def check_in_filter(url_array,and_or,keyword_str):
	if keyword_str=="":
		return url_array
	url_filter_array = []
	s = keyword_str.split(',')
	for array_x in url_array:
		ok = True
		for keyword_ in s:
			if str(array_x).find(keyword_)>=0:
				if and_or==0:
					url_filter_array.append(array_x)
					ok=False
					break
			else:
				if and_or==1:
					ok=False
					break
		if ok==True:
			url_filter_array.append(array_x)
	return url_filter_array

def handle_relative_url(handle_url,ori_url):
	handle_url=str(handle_url)
	if handle_url=="":
		return ori_url
	if handle_url.startswith("?"):
		temp_form_url = ori_url
		search_A = ori_url.find("?")
		if search_A<0:
			return ori_url + handle_url
		else:
			return ori_url[0:search_A] + handle_url
	if handle_url.startswith("//"):
		return "http:" + handle_url
	if (handle_url.startswith("http://") or handle_url.startswith("https://") or handle_url.startswith("ftp://")):
		return handle_url
	root_url = ori_url
	search_ = root_url.find("//")
	if search_<0:
		return handle_url
	search_x = root_url.find("/", search_+2);
	if (search_x<0):
		root_url = ori_url
	else:
		root_url = ori_url[0:search_x]
	same_dir_url = ori_url[search_+2:]
	search_x2 = same_dir_url.rfind("/")
	if search_x2<0:
		same_dir_url = ori_url
	else:
		same_dir_url = ori_url[0:search_x2+search_+2]
	if handle_url.startswith("/"):
		return (root_url + handle_url)
	if handle_url.startswith("./"):
		return (same_dir_url + handle_url[1:])
	return (same_dir_url + "/" + handle_url)

def remove_duplicates(values):
	output = []
	seen = set()
	for value in values:
		if value not in seen:
			output.append(value)
			seen.add(value)
	return output

def get_text_url(file_content):
	url_return_array = re.findall('(http|https|ftp)://([\w+?\.\w+])+([a-zA-Z0-9\~\!\@\#\$\%\^\&\*\(\)_\-\=\+\\\/\?\.\:\;\'\,]*)?', file_content)
	return url_return_array

def get_url_by_tagname_attribute(file_content,tagname,attribute,url_):
	soup = BeautifulSoup(file_content,'html.parser')
	url_return_array = []
	for link in soup.find_all(tagname):
		if link.get(attribute)!=None:
			url_return_array.append(handle_relative_url(link.get(attribute),url_))
	return url_return_array

def get_url_by_targetid_attribute(file_content,tagname,attribute,url_):
	soup = BeautifulSoup(file_content,'html.parser')
	url_return_array = []
	for link in soup.find_all(id=tagname):
		if link.get(attribute)!=None:
			url_return_array.append(handle_relative_url(link.get(attribute),url_))
	return url_return_array

def get_url_by_targetname_attribute(file_content,tagname,attribute,url_):
	soup = BeautifulSoup(file_content,'html.parser')
	url_return_array = []
	for link in soup.find_all(name=tagname):
		if link.get(attribute)!=None:
			url_return_array.append(handle_relative_url(link.get(attribute),url_))
	return url_return_array

# Tag/attribute pairs for ways 2-10; way 11 runs way 1 plus all of these,
# and ways 12-14 take a custom tag/id/name and attribute from the user.
WAY_TAG_ATTR = {
	2:("a","href"), 3:("img","src"), 4:("source","src"), 5:("embed","src"), 6:("object","data"),
	7:("link","href"), 8:("script","src"), 9:("frame","src"), 10:("iframe","src")
}

def run_functional_get_url(way_X,html_code,target_array_index,and_or,keywords,ctagename,cattribute):
	global target_url
	if way_X<1 or way_X>14:
		return
	ori_size=len(target_url[target_array_index])
	if way_X==1 or way_X==11:
		target_url[target_array_index].extend(get_text_url(html_code))
	if way_X==11:
		for tag_,attr_ in WAY_TAG_ATTR.values():
			target_url[target_array_index].extend(get_url_by_tagname_attribute(html_code,tag_,attr_,temp_url))
	elif way_X in WAY_TAG_ATTR:
		tag_,attr_=WAY_TAG_ATTR[way_X]
		target_url[target_array_index].extend(get_url_by_tagname_attribute(html_code,tag_,attr_,temp_url))
	elif way_X==12:
		target_url[target_array_index].extend(get_url_by_tagname_attribute(html_code,ctagename,cattribute,temp_url))
	elif way_X==13:
		target_url[target_array_index].extend(get_url_by_targetid_attribute(html_code,ctagename,cattribute,temp_url))
	elif way_X==14:
		target_url[target_array_index].extend(get_url_by_targetname_attribute(html_code,ctagename,cattribute,temp_url))
	target_url[target_array_index]=remove_duplicates(target_url[target_array_index])
	target_url[target_array_index]=check_in_filter(target_url[target_array_index],and_or,keywords)
	print( LIME + "抓取完成!共抓取到:" + str(len(target_url[target_array_index])-ori_size) + "個URL" + END)

while True:	
	while True:
		method_X=int_s(input("\n\n" + CYAN + "要執行的動作:\n(1)抓取目標網頁資料\n(2)載入網址列表抓取資料\n(3)規律網址抓取\n(4)顯示目前清單\n(5)下載目標清單\n(6)清空目標清單\n(7)複製清單\n(8)刪除目標清單的特定值\n(9)從剪貼簿貼上網址(每個一行)" + END + "\n\n"))
		if method_X<=9 and method_X>=1:
			break
		else:
			print("輸入有誤!\n\n")
	if (method_X==1):
		while True:
			target_array_index=int_s(input("\n\n" + CYAN + "您要「存入」的目標清單:(請輸入編號1~8)" + END + "\n\n"))
			if target_array_index<=8 and target_array_index>=1:
				break
			else:
				print("輸入有誤!只開放8個清單,請輸入1~8之間的號碼!")
		while True:
			way_X=int_s(input("\n\n" + CYAN + "抓取方式:\n(1)搜尋頁面所有純文字網址\n(2)抓取所有A標籤HREF屬性\n(3)抓取所有IMG標籤SRC屬性\n(4)抓取所有SOURCE標籤SRC屬性\n(5)抓取所有EMBED標籤SRC屬性\n(6)抓取所有OBJECT標籤DATA屬性\n(7)抓取所有LINK標籤HREF屬性\n(8)抓取所有SCRIPT標籤SRC屬性\n(9)抓取所有FRAME標籤SRC屬性\n(10)抓取所有IFRAME標籤SRC屬性\n(11)使用以上所有方法\n(12)自訂找尋標籤名及屬性名\n(13)自訂找尋ID及屬性名\n(14)自訂找尋Name及屬性名" + END + "\n\n"))
			if way_X<=14 and way_X>=1:
				break
			else:
				print("輸入有誤!")
		if way_X==12 or way_X==13 or way_X==14:	
			target_tagname_=str(input("\n目標標籤名稱/ID/Name:"))
			target_attribute_=str(input("\n目標屬性名稱:"))
		else:
			target_tagname_=""
			target_attribute_=""
		temp_url=url_correct(str(input("\n目標網頁URL:")))
		temp_url_code=str(input("\n目標網頁編碼(請輸入utf-8或big5或gbk…):"))
		keywords=str(input("\n" + CYAN + "請輸入過濾關鍵字(可留空,可多個,逗號為分隔符號):" + END + "\n\n"))
		and_or=0
		if keywords!="":
			while True:
				and_or=(-1)
				try:
					and_or=int_s(input("\n" + CYAN + "請輸入關鍵字邏輯閘:(1=and、0=or)" + END + "\n\n"))
				except:
					print("")
				if and_or==0 or and_or==1:
					break
				else:
					print("輸入有誤!\n")
		html_code=download_URL(temp_url,save_temp_dir,0,0,temp_url_code,1)
		if html_code=="ERROR":
			continue
		run_functional_get_url(way_X,html_code,target_array_index,and_or,keywords,target_tagname_,target_attribute_)
		input("\n\n完成!請輸入ENTER鍵跳出此功能...");
	elif(method_X==2):
		while True:
			RUN_array_index=int_s(input("\n\n" + CYAN + "您要「載入」的目標清單:(請輸入編號1~8)" + END + "\n\n"))
			if RUN_array_index<=8 and RUN_array_index>=1:
				break
			else:
				print("輸入有誤!只開放8個清單,請輸入1~8之間的號碼!")
		while True:
			target_array_index=int_s(input("\n\n" + CYAN + "您要「存入」的目標清單:(請輸入編號1~8)" + END + "\n\n"))
			if target_array_index<=8 and target_array_index>=1:
				break
			else:
				print("輸入有誤!只開放8個清單,請輸入1~8之間的號碼!")
		while True:
			way_X=int_s(input("\n\n" + CYAN + "抓取方式:\n(1)搜尋頁面所有純文字網址\n(2)抓取所有A標籤HREF屬性\n(3)抓取所有IMG標籤SRC屬性\n(4)抓取所有SOURCE標籤SRC屬性\n(5)抓取所有EMBED標籤SRC屬性\n(6)抓取所有OBJECT標籤DATA屬性\n(7)抓取所有LINK標籤HREF屬性\n(8)抓取所有SCRIPT標籤SRC屬性\n(9)抓取所有FRAME標籤SRC屬性\n(10)抓取所有IFRAME標籤SRC屬性\n(11)使用以上所有方法\n(12)自訂找尋標籤名及屬性名\n(13)自訂找尋ID及屬性名\n(14)自訂找尋Name及屬性名" + END + "\n\n"))
			if way_X<=14 and way_X>=1:
				break
			else:
				print("輸入有誤!")
		if way_X==12 or way_X==13 or way_X==14:
			target_tagname_=str(input("\n目標標籤名稱/ID/Name:"))
			target_attribute_=str(input("\n目標屬性名稱:"))
		else:
			target_tagname_=""
			target_attribute_=""
		keywords=str(input("\n" + CYAN + "請輸入過濾關鍵字(可留空,可多個,逗號為分隔符號):" + END + "\n\n"))
		while True:
			and_or=int_s(input("\n" + CYAN + "請輸入關鍵字邏輯閘:(1=and、0=or)" + END + "\n\n"))
			if and_or==0 or and_or==1:
				break
			else:
				print("輸入有誤!\n\n")
		temp_url_code=str(input("\n集合的網頁編碼(請輸入utf-8或big5或gbk…):"))
		for x in range(0,(len(target_url[RUN_array_index]))):
			html_code=download_URL(str(target_url[RUN_array_index][x]),save_temp_dir,(x+1),len(target_url[RUN_array_index]),temp_url_code,1)
			if html_code=="ERROR":
				continue
			run_functional_get_url(way_X,html_code,target_array_index,and_or,keywords,target_tagname_,target_attribute_)
		input("\n\n完成!請輸入ENTER鍵跳出此功能...");
	elif(method_X==3):
		start_number=int_s(input("起始點(數字):"))
		end_number=int_s(input("終止點(數字):"))
		step_ADD=int_s(input("每次遞增多少:"))
		str_padx=int_s(input("補滿位數至:"))
		if not os.path.exists('/sdcard/' + save_dir) :
			os.makedirs('/sdcard/' + save_dir )
		print(LIME + "※檔案將存在/sdcard/" + save_dir + "資料夾。" + END)
		while True:
			url=url_correct(input(LIME + "目標URL({i}是遞增數):" + END))
			if url.find("{i}")>=0:
				break
			else:
				print("網址未包含遞增數,請重新輸入網址。")
		for x in range(start_number,(end_number+1),step_ADD):
			download_URL(url.replace("{i}",str(x).zfill(str_padx)),save_dir,x,(end_number),"utf-8",0)
		input("\n\n完成!請輸入ENTER鍵跳出此功能...")
	elif(method_X==4):
		while True:
			RUN_array_index=int_s(input("\n\n" + CYAN + "您要「顯示」的目標清單:(請輸入編號1~8)" + END + "\n\n"))
			if RUN_array_index<=8 and RUN_array_index>=1:
				break
			else:
				print("輸入有誤!只開放8個清單,請輸入1~8之間的號碼!")
		for x in range(0,(len(target_url[RUN_array_index]))):
			print("URL (" + str(x+1) + "/" + str(len(target_url[RUN_array_index])) + "):" + str(target_url[RUN_array_index][x]))
		input("\n\n完成!請輸入ENTER鍵跳出此功能...")
	elif(method_X==5):
		while True:
			RUN_array_index=int_s(input("\n\n" + CYAN + "您要「下載」的目標清單:(請輸入編號1~8)" + END + "\n\n"))
			if RUN_array_index<=8 and RUN_array_index>=1:
				break
			else:
				print("輸入有誤!只開放8個清單,請輸入1~8之間的號碼!")
		for x in range(0,(len(target_url[RUN_array_index]))):
			download_URL(str(target_url[RUN_array_index][x]),save_dir,(x+1),len(target_url[RUN_array_index]),"utf-8",0)
		input("\n\n完成!請輸入ENTER鍵跳出此功能...")
	elif(method_X==6):
		ver = str(input("\n\n" + RED + "確定清空目標清單?(y/n)" + END + "\n\n"))
		if ver.lower()=="y":
			while True:
				RUN_array_index=int_s(input("\n\n" + CYAN + "您要「清空」的目標清單:(請輸入編號1~8)" + END + "\n\n"))
				if RUN_array_index<=8 and RUN_array_index>=1:
					break
				else:
					print("輸入有誤!只開放8個清單,請輸入1~8之間的號碼!")
			target_url[RUN_array_index]=[]
			input("\n\n完成!請輸入ENTER鍵跳出此功能...")
	elif(method_X==7):
		ver = str(input("\n\n" + RED + "確定複製目標清單?(y/n)" + END + "\n\n"))
		if ver.lower()=="y":
			while True:
				RUN_array_index=int_s(input("\n\n" + CYAN + "您要「複製」的來源清單:(請輸入編號1~8)" + END + "\n\n"))
				if RUN_array_index<=8 and RUN_array_index>=1:
					break
				else:
					print("輸入有誤!只開放8個清單,請輸入1~8之間的號碼!")
			while True:
				target_array_index=int_s(input("\n\n" + CYAN + "您要「存入」的目標清單:(請輸入編號1~8)" + END + "\n\n"))
				if target_array_index<=8 and target_array_index>=1:
					break
				else:
					print("輸入有誤!只開放8個清單,請輸入1~8之間的號碼!")
			target_url[target_array_index]=list(target_url[RUN_array_index])
			input("\n\n完成!請輸入ENTER鍵跳出此功能...")
	elif(method_X==8):
		ver = str(input("\n\n" + RED + "確定刪除目標清單特定值?(y/n)" + END + "\n\n"))
		if ver.lower()=="y":
			while True:
				RUN_array_index=int_s(input("\n\n" + CYAN + "您要「刪除值」的目標清單:(請輸入編號1~8)" + END + "\n\n"))
				if RUN_array_index<=8 and RUN_array_index>=1:
					break
				else:
					print("輸入有誤!只開放8個清單,請輸入1~8之間的號碼!")
			if len(target_url[RUN_array_index])!=0:
				while True:
					target_array_index=int_s(input("\n\n" + CYAN + "您要「刪除的值編號」:(請輸入編號0~" + str(len(target_url[RUN_array_index])-1) + ")" + END + "\n\n"))
					if target_array_index>=0 and target_array_index<=(len(target_url[RUN_array_index])-1):
						break
					else:
						print("輸入有誤!請輸入0~" + str(len(target_url[RUN_array_index])-1) + "之間的號碼!")
				del target_url[RUN_array_index][target_array_index]
			else:
				print("空清單!無任何值!故無法刪除。")
			input("\n\n完成!請輸入ENTER鍵跳出此功能...")
	elif(method_X==9):
		ver = str(input("\n\n" + RED + "確定從剪貼簿貼上目標清單?(y/n)" + END + "\n\n"))
		if ver.lower()=="y":
			while True:
				target_array_index=int_s(input("\n\n" + CYAN + "您要「存入」的目標清單:(請輸入編號1~8)" + END + "\n\n"))
				if target_array_index<=8 and target_array_index>=1:
					break
				else:
					print("輸入有誤!只開放8個清單,請輸入1~8之間的號碼!")
			kk=get_clipboard()
			if kk=="" or kk==None:
				print(RED + "剪貼簿是空的!" + END)
			else:
				ori_size=len(target_url[target_array_index])
				target_url[target_array_index].extend(kk.split("\n"))
				target_url[target_array_index]=remove_duplicates(target_url[target_array_index])
				print(LIME + "已添加進去 " + str(len(target_url[target_array_index])-ori_size) + " 個不重複的URL。" + END)
			input("\n\n完成!請輸入ENTER鍵跳出此功能...")
input("\n\n請輸入ENTER鍵結束...")

 


This entry was posted in General, Experience, Free, Functions, Note, Product, Python By Weil Jimmer.

Diary Program Download

No Comments
-
Updated 2018-04-14 22:30:30

Uniquely encrypted: a diary-writing program, free to download and use.

This is one of the programs I developed myself, and the one I currently consider my most properly finished product. Many diary programs on the net are shams. I have used them: they "pretend" to be password-protected, when in fact they merely block the reading screen. I could open those diary files in a plain text editor, and it turned out nothing was "encrypted" at all!

Without encryption, the opening password is just for show. Come try the most secure diary program our team has developed!

Main features:

1. Custom text color and style, with center/left/right alignment; images can be inserted, as can attached files (encrypted together with the text into a single diary file).

2. Custom background music, including playlist looping, volume control, autoplay, reordering, and the like.

3. A diary list for easy management, with search, renaming, list editing, file deletion, file loading, etc.

4. Diaries with real encryption and decryption, so you need not worry about leaks. Even someone holding the raw file cannot decrypt it; decryption requires the password. (If you forget it, the diary can never be recovered.)

5. Detailed records of the current time, the diary's word count, and how many images and files were inserted (it can even total the words, images, and files across all diaries).

6. Customizable background image, background color, text color, input-box color, and so on.

*Note 1: the diary program's background images and background music are not encrypted; the files are simply moved into a folder.

*Note 2: the password hash file the diary program creates is itself encrypted; even after decrypting it, all you get is a long ciphertext that has been salted and hashed ten thousand times, with practically no way to reverse it.
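The post doesn't publish the program's exact scheme, but "salted and hashed ten thousand times" is conventionally done with a key-derivation function such as PBKDF2, available in Python's standard library. A sketch, with illustrative parameters that are my own choices, not the program's:

```python
import hashlib, os

def password_digest(password, salt=None, rounds=10000):
    # Derive a verifier from the password: a random salt, then
    # 10,000 HMAC-SHA256 iterations. Without the password, the stored
    # (salt, digest) pair is practically irreversible.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, rounds)
    return salt, digest

salt, d = password_digest("my diary password")
assert password_digest("my diary password", salt)[1] == d  # same password verifies
assert password_digest("wrong password", salt)[1] != d     # a wrong one does not
```

Storing only the salt and the iterated digest is what makes a leaked hash file nearly useless to an attacker: each guess costs ten thousand hash rounds.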

Download link 1: https://url.weils.net/p

Download link 2: http://cht.tw/h/7i19i

2018/03/28 update: you can also try the mobile edition of the diary program developed by this site.

https://my.pcloud.com/publink/show?code=kZwY2NZ5wWIfrfX3DQXWBx8DTN3GfTt90Y7#folder=1583580893


This entry was posted in Android, Software, Free, Product, Tools, VB.NET By Weil Jimmer.

New Form Attack Program

No Comments
-
Updated 2017-05-19 19:39:22

It is finally officially done. Actually it was finished long ago; I just never posted about it. Others write a program and publish a post about it, so I figured I would do the same.

This is my first program written in C#. I did not expect to be this fluent right after picking the language up; it feels a little... incredible. To me the whole .NET family is the same. Naturally I will not touch VB.NET again, except to maintain old projects.

Its main functions cover three things:

1. Form flooding.

2. Form brute-forcing.

3. Logging in through a form and scraping data.

Today's update adds "cracking text CAPTCHAs that require simple arithmetic".

Test page:

As above: something like the red arithmetic text CAPTCHA in the lower-left corner.

With the data filled in it looks roughly like this, and the password brute-force starts right away. It cracked the password after about seven minutes.

Because my server is rather feeble, the CPU spiked to 100% and stopped responding during the attack, so it lagged a little. The recovered password was: 2a!. After entering it, the login succeeded.

The third function looks like this:

Of course nothing is filled in here, since I have no test page for it; I have tested it for real, though!

The program can crack simple text CAPTCHAs, whether the field is located by Name, by ID, or by a fixed character count; all of these work. Today the arithmetic feature joined them.

Supported cracking modes include A-Z, a-z, a-zA-Z, 0-9, 0-9a-z, 0-9A-Z, 0-9a-zA-Z plus symbols, national ID numbers, and dictionary files (dic.txt and dic2.txt in the same directory).

This thing skirts the edge of the law.

By Weil Jimmer


This entry was posted in C#, General, Functions, Product, Tools By Weil Jimmer.

Batch File Renaming Tool

No Comments
-
Updated 2017-04-24 22:44:55

Main features:

1. Filter file names and quickly build a list from folders or files.

2. Replace matches of a regular expression, or a plain string, with a new string.

3. Copy or move pieces of a file name and insert them at any position.

4. Insert or strip any legal string at any position.

5. Strip full-width and half-width characters, upper- and lower-case English letters, Chinese characters, digits, or symbols.

6. Insert a file's access, modification, or creation time at any position, with a custom time format.

7. Number the file names. Currently supported: a-z, A-Z, a-zA-Z, A-Za-z, Roman numerals, Chinese numerals (一, 二...), formal Chinese numerals (壹, 貳...), and base 2 through 62. The start number and the increment can both be set (and may be negative), with zero-padding to any width. The numbering system uses fast big-number arithmetic, so it will not overflow!

8. Change the extension to upper or lower case, title case, reversed, with appended text, converted to Traditional or Simplified Chinese, and so on; name and extension can also be renamed together rather than separately.

9. Insert the names of parent folders by level: the first level, the second level... or levels one and two, levels one through three, and so on, separated by a delimiter.

10. Parse MP3 file headers, so the list can be sorted by TrackNum, Title, Artist, Album, Genre, Year, or Duration.
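Feature seven's base-2-through-62 numbering with negative starts and zero-padding can be sketched like this (the digit alphabet and helper name are my own; the tool is a .NET product and its internals are not published):

```python
DIGITS = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def number_name(n, base, pad):
    # Render n in the given base (2-62), zero-padding the magnitude to
    # `pad` digits. Python ints are arbitrary precision, which is what
    # makes "big numbers that cannot overflow" free here.
    sign, n = ("-", -n) if n < 0 else ("", n)
    out = ""
    while True:
        n, r = divmod(n, base)
        out = DIGITS[r] + out
        if n == 0:
            break
    return sign + out.rjust(pad, "0")

[number_name(i, 62, 3) for i in range(60, 64)]
# → ['00Y', '00Z', '010', '011']
```

The same helper covers the a-z and A-Z modes too: they are just base 26 with a shifted digit alphabet.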

Update - 2015.12.22:

Added an x86 build for Windows XP.

Update - 2016.08.06:

Fixed an error caused by a bug when renaming files in subfolders.

Update - 2017.04.24:

New feature: file names can now be batch-converted from GBK to Big5.

Latest version: 1.0.0.3

Download link 1 (64-bit): https://url.weils.net/o

Download link 1 (32-bit): https://url.weils.net/j

Download link 2 (64-bit): https://my.pcloud.com/publink/show?code=XZp1rAZRmC9fntTQSuzGI00G5mUjLvM4R5X

Download link 2 (32-bit): https://my.pcloud.com/publink/show?code=XZW1rAZ0xTdErds0h8HPvfp6pxmvYbzzbdk


This entry was posted in Product, Tools By Weil Jimmer.

