Python: Get Google Autosuggest trend summaries for your niche search keywords

Capture Autosuggest trends with a Python script

Everyone loves Google Trends, but it is a bit tricky for long-tail keywords. We all like the official Google Trends service for insights into search behavior. However, two things keep many people from using it for solid work:

  1. When you need to research new niche keywords, Google Trends often has too little data.
  2. There is no official API for making requests to Google Trends: when we use unofficial modules like pytrends, we have to use proxy servers, or we get blocked.

In this article, I will share a Python script we wrote that exports trending keywords via Google Autosuggest.

Fetch and store Autosuggest results over time

Assume we have 1,000 seed keywords to send to Google Autosuggest. In return, we will probably get around 200,000 long-tail keywords. Then, we need to do the same one week later and compare the two datasets to answer two questions:

  • Which queries are new keywords compared to last time? This is probably the case we need: Google thinks those queries are becoming more significant, and by tracking them we can build our own Google Autosuggest trends solution!
  • Which queries are no longer trending keywords?
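The two questions above boil down to set differences between this week's and last week's suggestion sets. A minimal sketch, with made-up suggestion lists:

```python
# Suggestions collected in two consecutive weekly runs (made-up data)
last_week = {"crm software pricing", "crm software for startups"}
this_week = {"crm software pricing", "crm software free tier"}

new_keywords = this_week - last_week       # queries Google now suggests
dropped_keywords = last_week - this_week   # queries no longer suggested

print(sorted(new_keywords))      # ['crm software free tier']
print(sorted(dropped_keywords))  # ['crm software for startups']
```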

The script is quite easy, and I shared most of the code here. The updated code stores the data of past runs and compares the suggestions over time. To keep it simple, we avoided file-based databases like SQLite, so all data storage uses CSV files below. This lets you import the file in Excel and explore niche keyword trends for your business.
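The bookkeeping in that CSV works roughly like this: every suggestion row carries a first_seen and a last_seen date; when a suggestion reappears in a later run, only its last_seen is refreshed, and suggestions seen for the first time are appended with both dates set to today. A simplified sketch of that update, using made-up rows and the same column names as the full script below:

```python
import pandas as pd

# Previous run's history (made-up data)
history = pd.DataFrame({
    "first_seen": ["2021-01-01"],
    "last_seen":  ["2021-01-01"],
    "Keyword":    ["crm"],
    "Suggestion": ["crm software pricing"],
})
today = "2021-01-08"
fresh = {"crm software pricing", "crm software free tier"}  # this run's suggestions

# Reappearing suggestions: refresh last_seen only
history.loc[history["Suggestion"].isin(fresh), "last_seen"] = today

# First-time suggestions: append with first_seen == last_seen == today
new_only = sorted(fresh - set(history["Suggestion"]))
history = pd.concat(
    [history, pd.DataFrame({"first_seen": today, "last_seen": today,
                            "Keyword": "crm", "Suggestion": new_only})],
    ignore_index=True,
)
```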

How to use this Python script

  1. Provide the set of seed keywords that should be sent to the autocomplete: keywords.csv
  2. Adjust the script settings to your needs:
    • Language: default "en"
    • Country: default "US"
  3. Schedule the script to run once a week. You can also run it manually as needed.
  4. Use keyword_suggestions.csv for further analysis:
    • first_seen: the date the query first appeared in Autosuggest
    • last_seen: the date the query was last seen
    • is_new: if first_seen == last_seen, this is set to True; simply filter on this value to get the new trending searches in Google Autosuggest.
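Under the hood, the script queries Google's public suggest endpoint, which returns JSON shaped roughly like [query, [suggestion, ...]] (the exact URL and parameters appear in the code below). A minimal parsing sketch using a canned response instead of a live HTTP call:

```python
import json

# Canned response in the shape the suggest endpoint returns (assumption:
# first element is the echoed query, second is the list of suggestions)
raw = '["crm software a", ["crm software alternatives", "crm software api"]]'
suggestions = json.loads(raw)[1]
print(suggestions)  # ['crm software alternatives', 'crm software api']
```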

Here is the Python code

# Pemavor.com Autocomplete Trends
# Author: Stefan Neefischer (stefan.neefischer@gmail.com)
import concurrent.futures
from datetime import date
from datetime import datetime
import pandas as pd
import itertools
import requests
import string
import json
import time

charList = " " + string.ascii_lowercase + string.digits

def makeGoogleRequest(query):
    # If you make requests too quickly, you may be blocked by google 
    time.sleep(WAIT_TIME)
    URL="http://suggestqueries.google.com/complete/search"
    PARAMS = {"client":"opera",
            "hl":LANGUAGE,
            "q":query,
            "gl":COUNTRY}
    response = requests.get(URL, params=PARAMS)
    if response.status_code == 200:
        try:
            suggestedSearches = json.loads(response.content.decode('utf-8'))[1]
        except (json.JSONDecodeError, UnicodeDecodeError):
            suggestedSearches = json.loads(response.content.decode('latin-1'))[1]
        return suggestedSearches
    else:
        return "ERR"

def getGoogleSuggests(keyword):
    # err_count1 = 0
    queryList = [keyword + " " + char for char in charList]
    suggestions = []
    for query in queryList:
        suggestion = makeGoogleRequest(query)
        if suggestion != 'ERR':
            suggestions.append(suggestion)

    # Remove empty suggestions
    suggestions = set(itertools.chain(*suggestions))
    if "" in suggestions:
        suggestions.remove("")
    return suggestions

def autocomplete(csv_fileName):
    dateTimeObj = datetime.now().date()
    # read the csv file that contains the seed keywords to send to Google Autocomplete
    df = pd.read_csv(csv_fileName)
    keywords = df.iloc[:,0].tolist()
    resultList = []

    with concurrent.futures.ThreadPoolExecutor(max_workers=MAX_WORKERS) as executor:
        futuresGoogle = {executor.submit(getGoogleSuggests, keyword): keyword for keyword in keywords}

        for future in concurrent.futures.as_completed(futuresGoogle):
            key = futuresGoogle[future]
            for suggestion in future.result():
                resultList.append([key, suggestion])

    # Convert the results to a dataframe
    suggestion_new = pd.DataFrame(resultList, columns=['Keyword','Suggestion'])
    del resultList

    # if we have old results, read them
    try:
        suggestion_df = pd.read_csv("keyword_suggestions.csv")
    except FileNotFoundError:
        suggestion_df = pd.DataFrame(columns=['first_seen','last_seen','Keyword','Suggestion'])

    suggestionCommon_list=[]
    suggestionNew_list=[]
    for keyword in suggestion_new["Keyword"].unique():
        new_df=suggestion_new[suggestion_new["Keyword"]==keyword]
        old_df=suggestion_df[suggestion_df["Keyword"]==keyword]
        newSuggestion=set(new_df["Suggestion"].to_list())
        oldSuggestion=set(old_df["Suggestion"].to_list())
        commonSuggestion=list(newSuggestion & oldSuggestion)
        new_Suggestion=list(newSuggestion - oldSuggestion)
         
        for suggest in commonSuggestion:
            suggestionCommon_list.append([dateTimeObj,keyword,suggest])
        for suggest in new_Suggestion:
            suggestionNew_list.append([dateTimeObj,dateTimeObj,keyword,suggest])
    
    #new keywords
    newSuggestion_df = pd.DataFrame(suggestionNew_list, columns=['first_seen','last_seen','Keyword','Suggestion'])
    #shared keywords with date update
    commonSuggestion_df = pd.DataFrame(suggestionCommon_list, columns=['last_seen','Keyword','Suggestion'])
    merge = pd.merge(suggestion_df, commonSuggestion_df, on="Suggestion", how='left')
    merge = merge.rename(columns={'last_seen_y': 'last_seen', "Keyword_x": "Keyword"})
    merge["last_seen"] = merge["last_seen"].fillna(merge["last_seen_x"])
    merge = merge.drop(columns=["last_seen_x", "Keyword_y"])
    
    #merge old results with new results
    frames = [merge, newSuggestion_df]
    keywords_df =  pd.concat(frames, ignore_index=True, sort=False)
    # Save dataframe as a CSV file
    keywords_df['first_seen'] = pd.to_datetime(keywords_df['first_seen'])
    keywords_df['last_seen'] = pd.to_datetime(keywords_df['last_seen'])
    keywords_df = keywords_df.sort_values(by=['first_seen','Keyword'], ascending=[False,False])
    keywords_df['is_new'] = (keywords_df['first_seen'] == keywords_df['last_seen'])
    keywords_df = keywords_df[['first_seen','last_seen','Keyword','Suggestion','is_new']]
    keywords_df.to_csv('keyword_suggestions.csv', index=False)

# If you use more than 50 seed keywords, you should slow down your requests - otherwise Google will block the script
# If you have thousands of seed keywords, use e.g. WAIT_TIME = 1 and MAX_WORKERS = 5
WAIT_TIME = 0.2
MAX_WORKERS = 20
# set the autocomplete language
LANGUAGE = "en"
# set the autocomplete country code - DE, US, TR, GR, etc..
COUNTRY="US"
# Keyword_seed csv file name. One column csv file.
#csv_fileName="keyword_seeds.csv"
CSV_FILE_NAME="keywords.csv"
autocomplete(CSV_FILE_NAME)
# The result will be saved in the keyword_suggestions.csv file
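To pull only the newly trending queries out of keyword_suggestions.csv, filter on the is_new column. A quick sketch, with an in-memory frame of made-up rows standing in for the CSV:

```python
import pandas as pd

# Stand-in for pd.read_csv("keyword_suggestions.csv") (made-up rows)
df = pd.DataFrame({
    "first_seen": ["2021-01-08", "2021-01-01"],
    "last_seen":  ["2021-01-08", "2021-01-08"],
    "Keyword":    ["crm", "crm"],
    "Suggestion": ["crm software free tier", "crm software pricing"],
    "is_new":     [True, False],
})

trending = df[df["is_new"]]
print(trending["Suggestion"].tolist())  # ['crm software free tier']
```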

Download the Python script

What do you think?
