Enabling Google Location History in China: restoring Location History in Google Maps 7+ for Android (root required)

In version 7 and later of Google Maps for Android, the Location History feature no longer works in China; it reports that the feature cannot be used in China.

For me at least, Location History is extremely useful, above all for recording my route when out cycling, and I have yet to find a replacement. For a while I rolled back to an older version to keep using it. Later, with a solution provided by kernel, an expert from the SHLUG user group, I got the feature working again.

The method is as follows:

Have the Android system run the following script after boot:

Script: fixsim

It tricks Android into believing it is running under a US carrier's service.
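The fixsim script itself is not reproduced in this post. As a rough sketch of the idea only: such a script typically overrides Android's SIM-operator system properties with a US carrier's codes. The property names below are standard Android properties, but the exact contents of the original fixsim are an assumption, and 310260 (T-Mobile US) is just one possible MCC/MNC value:

```sh
#!/system/bin/sh
# Pretend the phone is on a US carrier by overriding the
# SIM/operator properties (310260 = T-Mobile US, MCC 310 / MNC 260).
setprop gsm.sim.operator.numeric 310260
setprop gsm.sim.operator.iso-country us
setprop gsm.operator.numeric 310260
setprop gsm.operator.iso-country us
```

This only makes sense on a rooted Android device, run early enough that Google Maps reads the spoofed values.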

Loading the script requires root on the phone. I use ROM Toolbox to run it at boot; the feature is found at:

ROM Toolbox - Tools - Scripter: import the script above (renaming it is recommended, e.g. init.fixsim.sh), set it to run at boot, then reboot the phone.

(Screenshots: ROM Toolbox Scripter settings, 2013-10-22)

It may take a while before Location History can be enabled in Google Maps, presumably a matter of system caching.

My test environment: Galaxy Nexus, Android 4.3 (rooted), ROM Toolbox Pro 5.9.0, and the script above.

----------------------------

Of course, any other way of getting the script to run will also work, though I have not tried others. kernel's own approach: "just drop it into /etc/init.d/, remember to fix the permissions, and of course the kernel has to support executing scripts under /etc/init.d".

-----------------------------

One more thing: Google Maps 7+ has a very handy way to zoom the map with a single finger, better than the traditional zoom buttons and more convenient than pinch-to-zoom:

Reply: a commentary piece on Google and Facebook

http://news.csdn.net/a/20110325/294648.html

 

yunpeng8800 2011-03-25 16:36:20

Nonsense. What does social networking have to do with search? Who would register on a website just to search for something? It is always "one replaces the other"; can they really not coexist? A piece written purely for the fee.

My reply:

Agreed!
Search is an effective way of actively obtaining information, while social networking involves much more passivity.

---------------------------------------

Someone else commented:

gdufstww 2011-03-25 15:24:21

It seems the IT industry both breeds myths easily and shatters them just as easily.

My reply:

That is only the surface; we need to see something deeper in it. Those who recite lines like "the empire, long divided, must unite; long united, must divide" offer nothing but emotional sighs: at best they can be poets, but they will never be thinkers.

 


--------------------

wayto 2011-03-25 20:43:26

Haha, the comments above show how little most people understand Facebook, and how linear their thinking is. To Facebook, Google is just an application, merely one among its many applications.

My reply:

fengyqf 2011-03-27 00:04:21

Could it also be understood the other way around: to Google, Facebook is just an information source, and a half-closed one at that?

What browser/client is Mediapartners-Google in the User-Agent?

Seeing Mediapartners-Google in the site's access logs and searching around, it turns out to be Google AdSense's crawler.

Mediapartners-Google

Mediapartners-Google fetches a page's text content for Google AdSense keyword analysis. Only pages serving Google AdSense are crawled by Mediapartners-Google.

Adsbot-Google fetches a page's text content as reference input for Google AdWords. Only Google AdWords landing pages are crawled by Adsbot-Google.

Mediapartners-Google is not the same as Googlebot; Google AdSense and web search use two separate sets of spiders.

Googlebot is commonly called the Google robot, Google crawler, or Google spider.

Google dispatches different Googlebots to fetch page content. The main ones:

Googlebot fetches a page's text content, which is stored in the Google web search and news search databases. This is usually what people mean by "the Google robot".

Googlebot-Mobile fetches page text content for Google mobile search.

Googlebot-Image fetches the images on a page for the Google Images database.
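When reading access logs, a simple substring check on the User-Agent is enough to tell these crawlers apart. A small illustrative Python helper (the bot names are the ones listed above; the matching order matters because "Googlebot" is a substring of the more specific names):

```python
# Google's crawlers, most specific first, so that e.g. "Googlebot-Image"
# is not misreported as plain "Googlebot".
GOOGLE_BOTS = [
    "Mediapartners-Google",   # AdSense crawler
    "AdsBot-Google",          # AdWords landing-page crawler
    "Googlebot-Image",        # image search
    "Googlebot-Mobile",       # mobile search
    "Googlebot",              # web/news search (check last)
]

def classify_google_bot(user_agent):
    """Return which Google crawler a User-Agent string belongs to, or None."""
    ua = user_agent.lower()
    for bot in GOOGLE_BOTS:
        if bot.lower() in ua:
            return bot
    return None
```

This is only a log-reading convenience; a crawler can of course send any User-Agent it likes.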

Google updates really fast; plus a rant about Baidu

Google updates really fast. Yesterday evening I bought a new domain, path8.net (I just do not like .com domains), switched the site over to it that night, and plan to retire the old domain page99.net after a transition period.

Exactly when it will be retired I cannot say; it depends on when Google has moved all indexed pages over to the new domain. In other words, once the old domain no longer has any value, it will simply be dropped.

As for the other search engines, I will not bother, least of all Baidu. Its index of the site is actually not small; a site:page99.net query shows essentially every page indexed, though rankings do not seem high. Low rankings I could live with. What is intolerable is that nearly all visits from Baidu search point to the home page, and almost none to article pages. As everyone knows, nearly every site's home page is updated constantly and its content is unstable, so search results pointing at the home page are effectively meaningless. Yet Baidu still does it this way; its technology must simply be terrible!

Two months ago I already wanted to block the Baidu spider, but never actually did it. Next step: if Baidu is still like this in another two months, I will block its spider outright.

The links inside Baidu Tieba, Baidu Space and so on are still worth updating by hand, though. My site monitoring shows that visitors from Baidu Space often read several pages; in other words, they are frequently high-quality visitors.

More importantly, Baidu Space brings in more traffic than Baidu search does (both under 10%, admittedly). As said above, the quality of Baidu search traffic is terrible, so Baidu Space is worth maintaining for a while longer.
Ha, that was a long digression, all off topic! Now back to the point.

After switching domains last night, I submitted a change of address in Google Webmaster Tools to notify Google and speed up the index update.

This morning, just ten minutes ago, site:path8.net returned 24 results. That means the Webmaster Tools submission has not taken effect yet; these results are simply from Google's crawler discovering the new site and adding it to the index straight away.
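For the record, the standard companion to a change-of-address submission is a site-wide 301 redirect from the old domain to the new one, so that search engines carry the old pages' standing over. An illustrative Apache sketch (assuming mod_rewrite is available; the rule itself is not from the original post):

```apache
# .htaccess on the old domain: permanently redirect every URL
# on page99.net to the same path on path8.net
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?page99\.net$ [NC]
RewriteRule ^(.*)$ http://path8.net/$1 [R=301,L]
```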

Difference between the Baidu and Google Algorithms

Differences between the Baidu and Google algorithms:

1. Impartiality: Google is more impartial than Baidu. Baidu's rankings involve a great deal of human intervention, while Google's PageRank algorithm makes its results fairer.

2. Site evaluation: Google's PageRank is an important factor in its algorithm and gives every site a score; Baidu has no comparable mechanism.

3. Page indexing: both index websites, and both can be queried with the site: command. Google's coverage of a site is fairly objective and true to reality; Baidu generally indexes far fewer pages. For example, for hsw.cn Google indexes about 15,500,000 pages while Baidu indexes only about 2,150,000.

4. Algorithm stability: Google's algorithm is relatively stable, while Baidu's is essentially "changing every day".

5. Sitemaps: Google's sitemap support makes it easy for its spider to crawl a site's pages; even deeply buried pages can be reached through the sitemap. Baidu has no sitemap feature, so its spider can only follow the links on the site's HTML pages.
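On the sitemap point: a minimal sitemap.xml is just a list of URLs in the sitemaps.org schema (the URLs below are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://path8.net/</loc>
  </url>
  <url>
    <!-- a deeply linked page the spider might otherwise reach late or never -->
    <loc>http://path8.net/some/deep/page.html</loc>
  </url>
</urlset>
```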

Google search settings, personalizing your search: turning off Google SafeSearch via the English google.com (removing "SafeSearch is on")

I really cannot see what Google China gained by moving to Hong Kong. After .cn searches were redirected to .hk, the results page now states outright:

Searching gecko wine: about 456,000 results (SafeSearch is on), showing results 1-10. (0.21 seconds)

So it went to Hong Kong to do "harmonizing"!

Or perhaps the .cn results were the "harmonized edition" all along, only without saying so on the page, so nobody knew.

People online say Google is "logical", so I tried to turn off SafeSearch through Google's settings. On the Chinese version this of course failed, but on the English version it can be done.

I had never paid attention to every item in google.com's Settings before, though I had changed the option for opening results in a new window. Lately I have been deliberately comparing Google's Chinese and English results; there are differences, sometimes large ones, so I now combine searches in both languages. The English version can also display its interface in Chinese, set in Settings as follows:

Open the English Google (google.com). If it automatically redirects to the Chinese site, click the link on the second-to-last line at the bottom of the Chinese home page:

- Google.com in English

Search for any keyword, for example enter

site:path8.net/tn

and press Enter (don't tell me you don't know the site: syntax in search engines).

On the results page, the top line shows on the right:

Web History | Settings | Sign in

three links; click "Settings" (if you are signed in to a Gmail account, Settings appears as a menu; choose Search settings from it).

[This is the very top line of the page, above even the logo.]

1)

Interface Language

The interface language; the default is English. You can set it to Chinese (Simplified), which may be easier to read, or leave it unset for a more original flavor.

2)

Search Language

Which languages of pages to search; the default, naturally, is still English. I recommend also checking:

Chinese (Simplified)

Chinese (Traditional)

both of these.

3)

SafeSearch Filtering

Google's SafeSearch blocks web pages containing explicit sexual content from appearing in search results.

Lock SafeSearch This will apply strict filtering to all searches from this computer using Firefox. Learn more

See? This is where the so-called SafeSearch lives. The default is the second option (moderate filtering); hurry up and switch to the third.

4)

Number of Results

The number of results per page. Keeping the default is fine; change it if you like.

5)

Results Window


The results window. Check "open search results in a new window": our network is so slow that this lets several windows load slowly in parallel while you read something else or open still more, then come back to see whether the earlier ones have finished loading. It is essentially the "multithreading" idea.

6)

There are a few more options below; if interested, look through them and set them as appropriate.

......

9)

Save the settings.
Click the "Save Preferences" button at the top right (or bottom right) of the page, and it is done.
The settings are stored in a cookie on this machine; after clearing cookies you will have to set them again.

And with that, a personalized Google search is in place.
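Incidentally, most of these preferences also exist as URL query parameters (hl for interface language, lr for result-language restriction, num for results per page, safe for SafeSearch), so a search URL can carry them directly. A small sketch; treat the exact parameter values as assumptions:

```python
from urllib.parse import urlencode

def google_search_url(query, interface_lang="en",
                      search_langs=("lang_en", "lang_zh-CN"),
                      num=10, safe="off"):
    """Build a google.com search URL carrying the preferences from this post."""
    params = {
        "q": query,
        "hl": interface_lang,          # interface language
        "lr": "|".join(search_langs),  # restrict result languages
        "num": num,                    # results per page
        "safe": safe,                  # SafeSearch: "off" / "active"
    }
    return "http://www.google.com/search?" + urlencode(params)

url = google_search_url("gecko wine")
```

Unlike the cookie-based settings, a URL built this way survives clearing cookies.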

Advanced Search

Have you noticed the row just below the search box (where Advanced Search sits)? When searching an English keyword, the results are often almost entirely English; select the language option there and click Search again to see far more Chinese results. Quite convenient.

One last note: you do not actually have to run a search first; on the English Google home page you can simply click Settings at the top right. Heh.

Google Toolbar does not even have a Chrome version; what an irony

Using Google Toolbar in Firefox takes up a whole row, which wastes space, so I wanted to install one in Chrome and use it only when needed.

But opening the Google Toolbar page in Chrome and clicking Download pops up a warning box:

Google Toolbar requires Firefox 2.0 or later.

Would you like to download the latest version of Firefox?

Cancel  OK

Speechless. Its own browser does not support its own toolbar.

This was on Fedora 12 Linux; I do not know whether the Windows version is unsupported too.

A bit of trivia:

Typing "google" with the Wubi input method produces the characters 恶业胃: "goog" maps to 恶业 ("evil karma") and "le" to 胃 ("stomach"). Quite the parody.

About Google PR: PR0 - Google's PageRank 0 Penalty

http://pr.efactory.de/e-pr0.shtml

PR0 - Google's PageRank 0 Penalty

By the end of 2001, the Google search engine introduced a new kind of penalty for websites that use questionable search engine optimization tactics: a PageRank of 0. In search engine optimization forums it is called PR0 and this term shall also be used here. Characteristic of PR0 is that all or at least a lot of pages of a website show a PageRank of 0 in the Google Toolbar, even if they do have high quality inbound links. Those pages are not completely removed from the index, but they always appear at the end of search results and, thus, are hard to find.

A PageRank of 0 does not always mean a penalty. Sometimes, websites which seem to be penalized simply lack inbound links with a sufficiently high PageRank. But if pages of a website which have formerly been placed well in search results suddenly show the dreaded white PageRank bar, and if there have not been any substantial changes regarding the inbound links of that website, this means - according to the prevailing opinion - certainly a penalty by Google.

We can do nothing but speculate about the causes for PR0 because Google representatives rarely publish new information on Google's algorithms. But, nonetheless, we want to give a theoretical approach for the way PR0 may work, because of its serious effects on search engine optimization.

The Background of PR0

Spam has always been one of the biggest problems that search engines had to deal with. When spam is detected by search engines, the usual proceeding is the banishment of those pages, websites, domains or even IP addresses from the index. But, removing websites manually from the index always means a large assignment of personnel. This causes costs and definitely runs contrary to Google's scalability goals. So, it appears to be necessary to filter spam automatically.

Filtering spam automatically carries the risk of penalizing innocent webmasters and, hence, the filters have to react rather sensibly on potential spam. But then, a lot of spam can pass the filters and some additional measures may be necessary. In order to filter spam effectively, it might be useful to take a look at links.

That Google uses link analysis in order to detect spam has been confirmed more or less clearly in WebmasterWorld's Google News Forum by a Google employee who posts as "GoogleGuy". Over and over again, he advises webmasters to avoid "linking to bad neighbourhoods". In the following, we want to specify the "linking to bad neighbourhoods" and, to become more precisely, we want to discuss how an identification of spam can be realized by the analysis of link structures. In particular, it shall be shown how entire networks of spam pages, which may even be located on a lot of different domains, can be detected.

BadRank as the Opposite of PageRank

The theoretical approach for PR0 as it is presented here was initially brought up by Raph Levien (www.advogato.org/person/raph). We want to introduce a technique that - just like PageRank - analyzes link structures, but, that unlike PageRank does not determine the general importance of a web page but rather measures its negative characteristics. For the sake of simplicity this technique shall be called "BadRank".

BadRank is in principle based on "linking to bad neighbourhoods". If one page links to another page with a high BadRank, the first page gets a high BadRank itself through this link. The similarities to PageRank are obvious. The difference is that BadRank is not based on the evaluation of inbound links of a web page but on its outbound links. In this sense, BadRank represents a reversion of PageRank. In a direct adaptation of the PageRank algorithm, BadRank would be given by the following formula:

BR(A) = E(A) (1-d) + d (BR(T1)/C(T1) + ... + BR(Tn)/C(Tn))

where

BR(A) is the BadRank of page A,
BR(Ti) is the BadRank of pages Ti which are outbound links of page A,
C(Ti) is here the number of inbound links of page Ti and
d is the again necessary damping factor.

In the previously discussed modifications of the PageRank algorithm, E(A) represented the special evaluation of certain web pages. Regarding the BadRank algorithm, this value reflects if a page was detected by a spam filter or not. Without the value E(A), the BadRank algorithm would be useless because it was nothing but another analysis of link structures which would not take any further criteria into account.

By means of the BadRank algorithm, first of all, spam pages can be evaluated. A filter assigns a numeric value E(A) to them, which can, for example, be based on the degree of spamming or maybe even better on their PageRank. Thereby, again, the sum of all E(A) has to equal the total number of web pages. In the course of an iterative computation, BadRank is not only transferred to pages which link to spam pages. In fact, BadRank is able to identify regions of the web where spam tends to occur relatively often, just as PageRank identifies regions of the web which are of general importance.

Of course, BadRank and PageRank have significant differences, especially because of using outbound and inbound links, respectively. Our example shows a simple, hierarchically structured website that reflects common link structures pretty well. Each page links to every page which is on a higher hierarchical level and on its branch of the website's tree structure. Each page links to pages which are arranged hierarchically directly below them and, additionally, pages on the same branch and the same hierarchical level link to each other.

The following table shows the distribution of inbound and outbound links for the hierarchical levels of such a site.

Level | inbound links | outbound links
0     | 6             | 2
1     | 4             | 4
2     | 2             | 3

As to be expected, regarding inbound links, a hierarchical gradation from the index page downwards takes place. In contrast, we find the highest number of outbound links on the website's mid-level. We can see similar results, when we add another level of pages to our website while the above described linking rules stay the same.

Level | inbound links | outbound links
0     | 14            | 2
1     | 8             | 4
2     | 4             | 5
3     | 2             | 4

Again, there is a concentration of outbound links on the website's mid-level. But most of all, the outbound links are much more evenly distributed than the inbound links.

If we assign a value of 100 to the index page's E(A) in our original example, while all other values E equal 1 and if the damping factor d is 0.85, we get the following BadRank values:

Page    | BadRank
A       | 22.39
B/C     | 17.39
D/E/F/G | 12.21

First of all, we see that the BadRank distributes from the index page among all other pages of the website. The combination of PageRank and BadRank will be discussed in detail below, but, no matter how the combination will be realized, it is obvious that both can neutralize each other very well. After all, we can assume that also the page's PageRank decreases, the lower the hierarchy level is, so that a PR0 can easily be achieved for all pages.
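This example is easy to reproduce: the link structure described above (every page links up its branch, down one level, and across to its same-branch sibling) plus a straightforward iteration of the BadRank formula gives exactly these values. A Python sketch:

```python
def badrank(outlinks, e, d=0.85, iterations=200):
    """Iterate BR(A) = E(A)(1-d) + d * sum(BR(Ti)/C(Ti)) over outbound links Ti,
    where C(Ti) is the number of *inbound* links of page Ti."""
    c = {p: 0 for p in outlinks}            # inbound-link counts C(Ti)
    for p in outlinks:
        for q in outlinks[p]:
            c[q] += 1
    br = {p: 1.0 for p in outlinks}
    for _ in range(iterations):
        br = {p: e[p] * (1 - d) + d * sum(br[q] / c[q] for q in outlinks[p])
              for p in outlinks}
    return br

# hierarchical example site: index A, level-1 pages B/C, level-2 pages D-G
site = {
    "A": ["B", "C"],
    "B": ["A", "C", "D", "E"],
    "C": ["A", "B", "F", "G"],
    "D": ["A", "B", "E"],
    "E": ["A", "B", "D"],
    "F": ["A", "C", "G"],
    "G": ["A", "C", "F"],
}
e = {p: 100.0 if p == "A" else 1.0 for p in site}
br = badrank(site, e)   # br["A"] converges to ~22.39, B/C to ~17.39, D-G to ~12.21
```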

If we now assume that the hierarchically inferior page G links to a page X with a constant BadRank BR(X)=10, whereby the link from page G is the only inbound link for page X, and if all values E for our example website equal 1, we get, at a damping factor d of 0.85, the following values:

Page | BadRank
A    | 4.82
B    | 7.50
C    | 14.50
D    | 4.22
E    | 4.22
F    | 11.22
G    | 17.18

In this case, we see that the distribution of the BadRank is less homogeneous than in the first scenario. Nonetheless, a distribution of BadRank among all pages of the website takes place. Indeed, the relatively low BadRank of the index page A is remarkable. It could be a problem to neutralize its PageRank which should be higher compared to the rest of the pages. This effect is not really desirable but it reflects the experiences of numerous webmasters. Quite often, we can see the phenomenon that all pages except for the index page of a website show a PR0 in the Google Toolbar, whereby the index page often has a Toolbar PageRank between 2 and 4. Therefore, we can probably assume that this special variant of PR0 is not caused by the detection of the according website by a spam filter, but the site rather received a penalty for "linking to bad neighbourhoods". Indeed, it is also possible that this variant of PR0 occurs when only hierarchically inferior pages of a website get trapped in a spam filter.

The Combination of PageRank and BadRank to PR0

If we assume that BadRank exists in the form presented here, there is now the question in which way BadRank and PageRank can be combined, in order to penalize as many spammers as possible while at the same time penalizing as few innocent webmasters as possible.

Intuitively, implementing BadRank directly in the actual PageRank computations seems to make sense. For instance, it is possible to calculate BadRank first and, then, divide a page's PageRank through its BadRank each time in the course of the iterative calculation of PageRank. This would have the advantage, that a page with a high BadRank could pass on just a little PageRank or none at all to the pages it links to. After all, one can argue that if one page links to a suspect page, all the other links on that page may also be suspect.

Indeed, such a direct connection between PageRank and BadRank is very risky. Most of all, the actual influence of BadRank on PageRank cannot be estimated in advance. It is to be considered that we would create a lot of pages which cannot pass on PageRank to the pages they link to. In fact, these pages are dangling links, and as it has been discussed in the section on outbound links, it is absolutely necessary to avoid dangling links while computing PageRank.

So, it would be advisable to have separate iterative calculations for PageRank and BadRank. Combining them afterwards can, for instance, be based on simple arithmetical operations. In principle, a subtraction would have the desirable consequence that relatively small BadRank values can hardly have a large influence on relatively high PageRank values. But, there would certainly be a problem to achieve PR0 for a large number of pages by using the subtraction. We would rather see a PageRank devaluation for many pages.

Achieving the effects that we know as PR0 seems easier to be realized by dividing PageRank through BadRank. But this would imply that BadRank receives an extremely high importance. However, since the average BadRank equals 1, a big part of BadRank values is smaller than 1 and, so, a normalization is necessary. Probably, normalizing and scaling BadRank to values between 0 and 1 so that "good" pages have values close to 1, and "bad" pages have values close to 0 and, subsequently, multiplying these values with PageRank would supply the best results.
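The normalize-and-multiply combination described in this paragraph might be sketched like this (a toy illustration of the idea, not Google's actual procedure):

```python
def combine(pagerank, badrank):
    """Scale BadRank into a (0, 1] factor (clean pages near 1, the worst
    page near 0) and multiply it into PageRank, as described above."""
    worst = max(badrank.values())
    return {page: pagerank[page] * (1.0 - badrank[page] / (worst + 1e-9))
            for page in pagerank}

# two pages with equal PageRank; "b" carries an enormous BadRank
combined = combine({"a": 5.0, "b": 5.0}, {"a": 0.1, "b": 10.0})
```

With these numbers, page "b" is driven to effectively zero (the PR0 effect) while page "a" keeps almost all of its PageRank.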

A very effective and easy to realize alternative would probably be a simple stepped evaluation of PageRank and BadRank. It would be reasonable that if BadRank exceeds a certain value it will always lead to a PR0. The same could happen when the relation of PageRank to BadRank is below a certain value. Additionally, it would make sense that if BadRank and/or the relation of BadRank to PageRank is below a certain value, BadRank takes no influence at all.

Only if none of these cases occurs, an actual combination of PageRank and BadRank - for instance by dividing PageRank through BadRank - would be necessary. In this way, all unwanted effects could be avoided.

A Critical View on BadRank and PR0

How Google would realize the combination of PageRank and BadRank is of rather minor importance. Indeed, a separate computation and a subsequent combination of both has the consequence that it may not be possible to see the actual effect of a high BadRank by looking at the Toolbar. If a page has a high PageRank in the original sense, the influence of its BadRank can be negligible. But if another page links to it, this could have quite serious consequences.

An even bigger problem is the direct reversion of the PageRank algorithm as we have presented it here: just as an additional inbound link for one page can do nothing but increase this page's PageRank, an additional outbound link can only increase its BadRank. This is because of the addition of BadRank values in the BadRank formula. So, it does not matter how many "good" outbound links a page has - one link to a spam page can be enough to lead to a PR0.

Indeed, this problem may appear in exceptional cases only. By our direct reversion of the PageRank algorithm, the BadRank of a page is divided by its inbound links and single links to pages with high BadRank transfer only a part of that BadRank in each case. Google's Matt Cutts' remark on this issue is: "If someone accidentally does a link to a bad site, that may not hurt them, but if they do twenty, that's a problem." (searchenginewatch.com/sereport/02/11-searchking.html)

However, as long as all links are weighted uniformly within the BadRank computation, there is another problem. If two pages differ widely in PageRank and both have a link to the same page with a high BadRank, this may lead to the page with the higher PageRank suffering far less from the transferred BadRank than the page with the low PageRank. We have to hope that Google knows how to deal with such problems. Nevertheless it shall be noted that, regarding the procedure presented here, outbound links can do nothing but harm.

Of course, all statements regarding how PR0 works are pure speculation. But in principle, an analysis of link structures similar to the PageRank technique is presumably the way Google deals with spam.

PageRank and Google are trademarks of Google Inc., Mountain View CA, USA. PageRank is protected by US Patent 6,285,999.

The content of this document may be reproduced on the web provided that a copyright notice is included and that there is a straight HTML hyperlink to the corresponding page at pr.efactory.de in direct context.


(c)2002/2003 eFactory GmbH & Co. KG Internet-Agentur - written by Markus Sobek